.. meta::
   :description: ROCm compatibility matrix
   :keywords: GPU, architecture, hardware, compatibility, system, requirements, components, libraries

**************************************************************************************
Compatibility matrix
**************************************************************************************

Use this matrix to view the ROCm compatibility and system requirements across successive major and minor releases. You can also refer to the :ref:`past versions of ROCm compatibility matrix`.

GPUs listed in the following table support compute workloads (no display information or graphics). If you’re using ROCm with AMD Radeon GPUs or Ryzen APUs for graphics workloads, see :doc:`Use ROCm on Radeon and Ryzen ` to verify compatibility and system requirements.

.. |br| raw:: html

   <br/>
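To check a system against the columns in this matrix, it helps to know the installed HIP runtime version and the GPU's LLVM target (``gfx`` identifier). The following minimal Python sketch assumes a ROCm build of PyTorch is available; the ``gcnArchName`` attribute is only present in recent PyTorch releases, so it is guarded before use.

.. code-block:: python

   import torch

   # torch.version.hip is populated only in ROCm builds of PyTorch (it is None on CUDA builds).
   print("PyTorch:", torch.__version__)
   print("HIP runtime:", torch.version.hip)

   if torch.cuda.is_available():
       props = torch.cuda.get_device_properties(0)
       print("GPU:", torch.cuda.get_device_name(0))
       # Recent PyTorch releases expose the LLVM target (for example, gfx942) as gcnArchName.
       if hasattr(props, "gcnArchName"):
           print("LLVM target:", props.gcnArchName)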
.. container:: format-big-table .. csv-table:: :header: "ROCm Version", "7.1.1", "7.1.0", "6.4.0" :stub-columns: 1 :ref:`Operating systems & kernels ` [#os-compatibility]_,Ubuntu 24.04.3,Ubuntu 24.04.3,Ubuntu 24.04.2 ,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5 ,"RHEL 10.1, 10.0, 9.7, |br| 9.6, 9.4","RHEL 10.0, 9.6, 9.4","RHEL 9.5, 9.4" ,RHEL 8.10,RHEL 8.10,RHEL 8.10 ,SLES 15 SP7,SLES 15 SP7,SLES 15 SP6 ,"Oracle Linux 10, 9, 8","Oracle Linux 10, 9, 8","Oracle Linux 9, 8" ,"Debian 13, 12","Debian 13, 12",Debian 12 ,,,Azure Linux 3.0 ,Rocky Linux 9,Rocky Linux 9, ,.. _architecture-support-compatibility-matrix:,, :doc:`Architecture `,CDNA4,CDNA4, ,CDNA3,CDNA3,CDNA3 ,CDNA2,CDNA2,CDNA2 ,CDNA,CDNA,CDNA ,RDNA4,RDNA4, ,RDNA3,RDNA3,RDNA3 ,RDNA2,RDNA2,RDNA2 ,.. _gpu-support-compatibility-matrix:,, :doc:`GPU / LLVM target ` [#gpu-compatibility]_,gfx950,gfx950, ,gfx1201,gfx1201, ,gfx1200,gfx1200, ,gfx1101,gfx1101, ,gfx1100,gfx1100,gfx1100 ,gfx1030,gfx1030,gfx1030 ,gfx942,gfx942,gfx942 ,gfx90a,gfx90a,gfx90a ,gfx908,gfx908,gfx908 ,,, FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix:,, :doc:`PyTorch <../compatibility/ml-compatibility/pytorch-compatibility>`,"2.9, 2.8, 2.7","2.8, 2.7, 2.6","2.6, 2.5, 2.4, 2.3" :doc:`TensorFlow <../compatibility/ml-compatibility/tensorflow-compatibility>`,"2.20.0, 2.19.1, 2.18.1","2.20.0, 2.19.1, 2.18.1","2.18.1, 2.17.1, 2.16.2" :doc:`JAX <../compatibility/ml-compatibility/jax-compatibility>`,0.7.1,0.7.1,0.4.35 :doc:`DGL <../compatibility/ml-compatibility/dgl-compatibility>` [#dgl_compat]_,N/A,N/A,2.4.0 :doc:`llama.cpp <../compatibility/ml-compatibility/llama-cpp-compatibility>` [#llama-cpp_compat]_,N/A,N/A,b5997 `ONNX Runtime `_,1.23.1,1.22.0,1.20.0 ,,, THIRD PARTY COMMS,.. _thirdpartycomms-support-compatibility-matrix:,, `UCC `_,>=1.4.0,>=1.4.0,>=1.3.0 `UCX `_,>=1.17.0,>=1.17.0,>=1.15.0 ,,, THIRD PARTY ALGORITHM,.. _thirdpartyalgorithm-support-compatibility-matrix:,, Thrust,2.8.5,2.8.5,2.5.0 CUB,2.8.5,2.8.5,2.5.0 ,,, DRIVER & USER SPACE [#kfd_support]_,.. _kfd-userspace-support-compatibility-matrix:,, :doc:`AMD GPU Driver `,"30.20.1, 30.20.0 [#mi325x_KVM]_, |br| 30.10.2, 30.10.1 [#driver_patch]_, |br| 30.10, 6.4.x","30.20.0 [#mi325x_KVM]_, 30.10.2, |br| 30.10.1 [#driver_patch]_, 30.10, 6.4.x","6.4.x, 6.3.x, 6.2.x, 6.1.x" ,,, ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix:,, :doc:`Composable Kernel `,1.1.0,1.1.0,1.1.0 :doc:`MIGraphX `,2.14.0,2.14.0,2.12.0 :doc:`MIOpen `,3.5.1,3.5.1,3.4.0 :doc:`MIVisionX `,3.4.0,3.4.0,3.2.0 :doc:`rocAL `,2.4.0,2.4.0,2.2.0 :doc:`rocDecode `,1.4.0,1.4.0,0.10.0 :doc:`rocJPEG `,1.2.0,1.2.0,0.8.0 :doc:`rocPyDecode `,0.7.0,0.7.0,0.3.1 :doc:`RPP `,2.1.0,2.1.0,1.9.10 ,,, COMMUNICATION,.. _commlibs-support-compatibility-matrix:,, :doc:`RCCL `,2.27.7,2.27.7,2.22.3 :doc:`rocSHMEM `,3.1.0,3.0.0,2.0.0 ,,, MATH LIBS,.. _mathlibs-support-compatibility-matrix:,, `half `_ ,1.12.0,1.12.0,1.12.0 :doc:`hipBLAS `,3.1.0,3.1.0,2.4.0 :doc:`hipBLASLt `,1.1.0,1.1.0,0.12.0 :doc:`hipFFT `,1.0.21,1.0.21,1.0.18 :doc:`hipfort `,0.7.1,0.7.1,0.6.0 :doc:`hipRAND `,3.1.0,3.1.0,2.12.0 :doc:`hipSOLVER `,3.1.0,3.1.0,2.4.0 :doc:`hipSPARSE `,4.1.0,4.1.0,3.2.0 :doc:`hipSPARSELt `,0.2.5,0.2.5,0.2.3 :doc:`rocALUTION `,4.0.1,4.0.1,3.2.2 :doc:`rocBLAS `,5.1.1,5.1.0,4.4.0 :doc:`rocFFT `,1.0.35,1.0.35,1.0.32 :doc:`rocRAND `,4.1.0,4.1.0,3.3.0 :doc:`rocSOLVER `,3.31.0,3.31.0,3.28.0 :doc:`rocSPARSE `,4.1.0,4.1.0,3.4.0 :doc:`rocWMMA `,2.1.0,2.0.0,1.7.0 :doc:`Tensile `,4.44.0,4.44.0,4.43.0 ,,, PRIMITIVES,.. 
_primitivelibs-support-compatibility-matrix:,, :doc:`hipCUB `,4.1.0,4.1.0,3.4.0 :doc:`hipTensor `,2.0.0,2.0.0,1.5.0 :doc:`rocPRIM `,4.1.0,4.1.0,3.4.0 :doc:`rocThrust `,4.1.0,4.1.0,3.3.0 ,,, SUPPORT LIBS,,, `hipother `_,7.1.52802,7.1.25424,6.4.43482 `rocm-core `_,7.1.1,7.1.0,6.4.0 `ROCT-Thunk-Interface `_,N/A [#ROCT-rocr]_,N/A [#ROCT-rocr]_,N/A [#ROCT-rocr]_ ,,, SYSTEM MGMT TOOLS,.. _tools-support-compatibility-matrix:,, :doc:`AMD SMI `,26.2.0,26.1.0,25.3.0 :doc:`ROCm Data Center Tool `,1.2.0,1.2.0,0.3.0 :doc:`rocminfo `,1.0.0,1.0.0,1.0.0 :doc:`ROCm SMI `,7.8.0,7.8.0,7.5.0 :doc:`ROCm Validation Suite `,1.3.0,1.2.0,1.1.0 ,,, PERFORMANCE TOOLS,,, :doc:`ROCm Bandwidth Test `,2.6.0,2.6.0,1.4.0 :doc:`ROCm Compute Profiler `,3.3.1,3.3.0,3.1.0 :doc:`ROCm Systems Profiler `,1.2.1,1.2.0,1.0.0 :doc:`ROCProfiler `,2.0.70101,2.0.70100,2.0.60400 :doc:`ROCprofiler-SDK `,1.0.0,1.0.0,0.6.0 :doc:`ROCTracer `,4.1.70101,4.1.70100,4.1.60400 ,,, DEVELOPMENT TOOLS,,, :doc:`HIPIFY `,20.0.0,20.0.0,19.0.0 :doc:`ROCm CMake `,0.14.0,0.14.0,0.14.0 :doc:`ROCdbgapi `,0.77.4,0.77.4,0.77.2 :doc:`ROCm Debugger (ROCgdb) `,16.3.0,16.3.0,15.2.0 `rocprofiler-register `_,0.5.0,0.5.0,0.4.0 :doc:`ROCr Debug Agent `,2.1.0,2.1.0,2.0.4 ,,, COMPILERS,.. _compilers-support-compatibility-matrix:,, `clang-ocl `_,N/A,N/A,N/A :doc:`hipCC `,1.1.1,1.1.1,1.1.1 `Flang `_,20.0.025444,20.0.025425,19.0.0.25133 :doc:`llvm-project `,20.0.025444,20.0.025425,19.0.0.25133 `OpenMP `_,20.0.025444,20.0.025425,19.0.0.25133 ,,, RUNTIMES,.. _runtime-support-compatibility-matrix:,, :doc:`AMD CLR `,7.1.52802,7.1.25424,6.4.43482 :doc:`HIP `,7.1.52802,7.1.25424,6.4.43482 `OpenCL Runtime `_,2.0.0,2.0.0,2.0.0 :doc:`ROCr Runtime `,1.18.0,1.18.0,1.15.0 .. rubric:: Footnotes .. [#os-compatibility] Some operating systems are supported on limited GPUs. For detailed information, see the latest :ref:`supported_distributions`. For version specific information, see `ROCm 7.1.1 `__, `ROCm 7.1.0 `__, and `ROCm 6.4.0 `__. .. [#gpu-compatibility] Some GPUs have limited operating system support. For detailed information, see the latest :ref:`supported_GPUs`. For version specific information, see `ROCm 7.1.1 `__, `ROCm 7.1.0 `__, and `ROCm 6.4.0 `__. .. [#dgl_compat] DGL is supported only on ROCm 7.0.0, ROCm 6.4.3 and ROCm 6.4.0. .. [#llama-cpp_compat] llama.cpp is supported only on ROCm 7.0.0 and ROCm 6.4.x. .. [#mi325x_KVM] For AMD Instinct MI325X KVM SR-IOV users, do not use AMD GPU Driver (amdgpu) 30.20.0. .. [#driver_patch] AMD GPU Driver (amdgpu) 30.10.1 is a quality release that resolves an issue identified in the 30.10 release. There are no other significant changes or feature additions in ROCm 7.0.1 from ROCm 7.0.0. AMD GPU Driver (amdgpu) 30.10.1 is compatible with ROCm 7.0.1 and ROCm 7.0.0. .. [#kfd_support] As of ROCm 6.4.0, forward and backward compatibility between the AMD GPU Driver (amdgpu) and its user space software is provided up to a year apart. For earlier ROCm releases, the compatibility is provided for +/- 2 releases. The supported user space versions on this page were accurate as of the time of initial ROCm release. For the most up-to-date information, see the latest version of this information at `User and AMD GPU Driver support matrix `_. .. [#ROCT-rocr] Starting from ROCm 6.3.0, the ROCT Thunk Interface is included as part of the ROCr runtime package. .. 
_OS-kernel-versions: Operating systems, kernel and Glibc versions ********************************************* For detailed information on operating system supported on ROCm 7.1.1 and associated Kernel and Glibc version, see the latest :ref:`supported_distributions`. For version specific information, see `ROCm 7.1.0 `__, and `ROCm 6.4.0 `__. .. note:: * See `Red Hat Enterprise Linux Release Dates `_ to learn about the specific kernel versions supported on Red Hat Enterprise Linux (RHEL). * See `List of SUSE Linux Enterprise Server kernel `_ to learn about the specific kernel version supported on SUSE Linux Enterprise Server (SLES). .. Footnotes and ref anchors in below historical tables should be appended with "-past-60", to differentiate from the footnote references in the above, latest, compatibility matrix. It also allows to easily find & replace. An easy way to work is to download the historical.CSV file, and update open it in excel. Then when content is ready, delete the columns you don't need, to build the current compatibility matrix to use in above table. Find & replace all instances of "-past-60" to make it ready for above table. .. _past-rocm-compatibility-matrix: Past versions of ROCm compatibility matrix *************************************************** Expand for full historical view of: .. dropdown:: ROCm 6.0 - Present You can `download the entire .csv <../downloads/compatibility-matrix-historical-6.0.csv>`_ for offline reference. .. csv-table:: :file: compatibility-matrix-historical-6.0.csv :header-rows: 1 :stub-columns: 1 .. rubric:: Footnotes .. [#os-compatibility-past-60] Some operating systems are supported on limited GPUs. For detailed information, see the latest :ref:`supported_distributions`. For version specific information, see `ROCm 7.1.1 `__, `ROCm 7.1.0 `__, and `ROCm 6.4.0 `__. .. [#gpu-compatibility-past-60] Some GPUs have limited operating system support. For detailed information, see the latest :ref:`supported_GPUs`. For version specific information, see `ROCm 7.1.1 `__, `ROCm 7.1.0 `__, and `ROCm 6.4.0 `__. .. [#tf-mi350-past-60] TensorFlow 2.17.1 is not supported on AMD Instinct MI350 Series GPUs. Use TensorFlow 2.19.1 or 2.18.1 with MI350 Series GPUs instead. .. [#verl_compat-past-60] verl is supported only on ROCm 7.0.0 and 6.2.0. .. [#stanford-megatron-lm_compat-past-60] Stanford Megatron-LM is supported only on ROCm 6.3.0. .. [#dgl_compat-past-60] DGL is supported only on ROCm 7.0.0, ROCm 6.4.3 and ROCm 6.4.0. .. [#megablocks_compat-past-60] Megablocks is supported only on ROCm 6.3.0. .. [#ray_compat-past-60] Ray is supported only on ROCm 6.4.1. .. [#llama-cpp_compat-past-60] llama.cpp is supported only on ROCm 7.0.0 and 6.4.x. .. [#flashinfer_compat-past-60] FlashInfer is supported only on ROCm 6.4.1. .. [#mi325x_KVM-past-60] For AMD Instinct MI325X KVM SR-IOV users, do not use AMD GPU Driver (amdgpu) 30.20.0. .. [#driver_patch-past-60] AMD GPU Driver (amdgpu) 30.10.1 is a quality release that resolves an issue identified in the 30.10 release. There are no other significant changes or feature additions in ROCm 7.0.1 from ROCm 7.0.0. AMD GPU Driver (amdgpu) 30.10.1 is compatible with ROCm 7.0.1 and ROCm 7.0.0. .. [#kfd_support-past-60] As of ROCm 6.4.0, forward and backward compatibility between the AMD GPU Driver (amdgpu) and its user space software is provided up to a year apart. For earlier ROCm releases, the compatibility is provided for +/- 2 releases. 
The supported user space versions on this page were accurate as of the time of initial ROCm release. For the most up-to-date information, see the latest version of this information at `User and AMD GPU Driver support matrix `_. .. [#ROCT-rocr-past-60] Starting from ROCm 6.3.0, the ROCT Thunk Interface is included as part of the ROCr runtime package. --- :orphan: .. meta:: :description: Deep Graph Library (DGL) compatibility :keywords: GPU, CPU, deep graph library, DGL, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************** DGL compatibility ******************************************************************************** Deep Graph Library (`DGL `__) is an easy-to-use, high-performance, and scalable Python package for deep learning on graphs. DGL is framework agnostic, meaning that if a deep graph model is a component in an end-to-end application, the rest of the logic is implemented using PyTorch. DGL provides a high-performance graph object that can reside on either CPUs or GPUs. It bundles structural data features for better control and provides a variety of functions for computing with graph objects, including efficient and customizable message passing primitives for Graph Neural Networks. Support overview ================================================================================ - The ROCm-supported version of DGL is maintained in the official `https://github.com/ROCm/dgl `__ repository, which differs from the `https://github.com/dmlc/dgl `__ upstream repository. - To get started and install DGL on ROCm, use the prebuilt :ref:`Docker images `, which include ROCm, DGL, and all required dependencies. - See the :doc:`ROCm DGL installation guide ` for installation and setup instructions. - You can also consult the upstream `Installation guide `__ for additional context. Version support -------------------------------------------------------------------------------- DGL is supported on `ROCm 7.0.0 `__, `ROCm 6.4.3 `__, and `ROCm 6.4.0 `__. Supported devices -------------------------------------------------------------------------------- **Officially Supported**: AMD Instinct™ MI300X, MI250X .. _dgl-recommendations: Use cases and recommendations ================================================================================ DGL can be used for Graph Learning, and building popular graph models like GAT, GCN, and GraphSage. Using these models, a variety of use cases are supported: - Recommender systems - Network Optimization and Analysis - 1D (Temporal) and 2D (Image) Classification - Drug Discovery For use cases and recommendations, refer to the `AMD ROCm blog `__, where you can search for DGL examples and best practices to optimize your workloads on AMD GPUs. * Although multiple use cases of DGL have been tested and verified, a few have been outlined in the `DGL in the Real World: Running GNNs on Real Use Cases `__ blog post, which walks through four real-world graph neural network (GNN) workloads implemented with the Deep Graph Library on ROCm. It covers tasks ranging from heterogeneous e-commerce graphs and multiplex networks (GATNE) to molecular graph regression (GNN-FiLM) and EEG-based neurological diagnosis (EEG-GCNN). For each use case, the authors detail: the dataset and task, how DGL is used, and their experience porting to ROCm. 
It is shown that DGL codebases often run without modification, with seamless integration of graph operations, message passing, sampling, and convolution. * The `Graph Neural Networks (GNNs) at Scale: DGL with ROCm on AMD Hardware `__ blog post introduces the Deep Graph Library (DGL) and its enablement on the AMD ROCm platform, bringing high-performance graph neural network (GNN) training to AMD GPUs. DGL bridges the gap between dense tensor frameworks and the irregular nature of graph data through a graph-first, message-passing abstraction. Its design ensures scalability, flexibility, and interoperability across frameworks like PyTorch and TensorFlow. AMD’s ROCm integration enables DGL to run efficiently on HIP-based GPUs, supported by prebuilt Docker containers and open-source repositories. This marks a major step in AMD's mission to advance open, scalable AI ecosystems beyond traditional architectures. You can pre-process datasets and begin training on AMD GPUs through: * Single-GPU training/inference * Multi-GPU training .. _dgl-docker-compat: Docker image compatibility ================================================================================ .. |docker-icon| raw:: html AMD validates and publishes `DGL images `__ with ROCm backends on Docker Hub. The following Docker image tags and associated inventories represent the latest available DGL version from the official Docker Hub. Click the |docker-icon| to view the image on Docker Hub. .. list-table:: :header-rows: 1 :class: docker-image-compatibility * - Docker image - ROCm - DGL - PyTorch - Ubuntu - Python * - .. raw:: html rocm/dgl - `7.0.0 `__ - `2.4.0 `__ - `2.8.0 `__ - 24.04 - `3.12.9 `__ * - .. raw:: html rocm/dgl - `7.0.0 `__ - `2.4.0 `__ - `2.6.0 `__ - 24.04 - `3.12.9 `__ * - .. raw:: html rocm/dgl - `7.0.0 `__ - `2.4.0 `__ - `2.7.1 `__ - 22.04 - `3.10.16 `__ * - .. raw:: html rocm/dgl - `6.4.3 `__ - `2.4.0 `__ - `2.6.0 `__ - 24.04 - `3.12.9 `__ * - .. raw:: html rocm/dgl - `6.4.0 `__ - `2.4.0 `__ - `2.6.0 `__ - 24.04 - `3.12.9 `__ * - .. raw:: html rocm/dgl - `6.4.0 `__ - `2.4.0 `__ - `2.4.1 `__ - 24.04 - `3.12.9 `__ * - .. raw:: html rocm/dgl - `6.4.0 `__ - `2.4.0 `__ - `2.4.1 `__ - 22.04 - `3.10.16 `__ * - .. raw:: html rocm/dgl - `6.4.0 `__ - `2.4.0 `__ - `2.3.0 `__ - 22.04 - `3.10.16 `__ Key ROCm libraries for DGL ================================================================================ DGL on ROCm depends on specific libraries that affect its features and performance. Using the DGL Docker container or building it with the provided Docker file or a ROCm base image is recommended. If you prefer to build it yourself, ensure the following dependencies are installed: .. list-table:: :header-rows: 1 * - ROCm library - ROCm 7.0.0 Version - ROCm 6.4.x Version - Purpose * - `Composable Kernel `_ - 1.1.0 - 1.1.0 - Enables faster execution of core operations like matrix multiplication (GEMM), convolutions and transformations. * - `hipBLAS `_ - 3.0.0 - 2.4.0 - Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for matrix and vector operations. * - `hipBLASLt `_ - 1.0.0 - 0.12.0 - hipBLASLt is an extension of the hipBLAS library, providing additional features like epilogues fused into the matrix multiplication kernel or use of integer tensor cores. * - `hipCUB `_ - 4.0.0 - 3.4.0 - Provides a C++ template library for parallel algorithms for reduction, scan, sort and select. * - `hipFFT `_ - 1.0.20 - 1.0.18 - Provides GPU-accelerated Fast Fourier Transform (FFT) operations. 
* - `hipRAND `_ - 3.0.0 - 2.12.0 - Provides fast random number generation for GPUs. * - `hipSOLVER `_ - 3.0.0 - 2.4.0 - Provides GPU-accelerated solvers for linear systems, eigenvalues, and singular value decompositions (SVD). * - `hipSPARSE `_ - 4.0.1 - 3.2.0 - Accelerates operations on sparse matrices, such as sparse matrix-vector or matrix-matrix products. * - `hipSPARSELt `_ - 0.2.4 - 0.2.3 - Accelerates operations on sparse matrices, such as sparse matrix-vector or matrix-matrix products. * - `hipTensor `_ - 2.0.0 - 1.5.0 - Optimizes for high-performance tensor operations, such as contractions. * - `MIOpen `_ - 3.5.0 - 3.4.0 - Optimizes deep learning primitives such as convolutions, pooling, normalization, and activation functions. * - `MIGraphX `_ - 2.13.0 - 2.12.0 - Adds graph-level optimizations, ONNX models and mixed precision support and enable Ahead-of-Time (AOT) Compilation. * - `MIVisionX `_ - 3.3.0 - 3.2.0 - Optimizes acceleration for computer vision and AI workloads like preprocessing, augmentation, and inferencing. * - `rocAL `_ - 3.3.0 - 2.2.0 - Accelerates the data pipeline by offloading intensive preprocessing and augmentation tasks. rocAL is part of MIVisionX. * - `RCCL `_ - 2.26.6 - 2.22.3 - Optimizes for multi-GPU communication for operations like AllReduce and Broadcast. * - `rocDecode `_ - 1.0.0 - 0.10.0 - Provides hardware-accelerated data decoding capabilities, particularly for image, video, and other dataset formats. * - `rocJPEG `_ - 1.1.0 - 0.8.0 - Provides hardware-accelerated JPEG image decoding and encoding. * - `RPP `_ - 2.0.0 - 1.9.10 - Speeds up data augmentation, transformation, and other preprocessing steps. * - `rocThrust `_ - 4.0.0 - 3.3.0 - Provides a C++ template library for parallel algorithms like sorting, reduction, and scanning. * - `rocWMMA `_ - 2.0.0 - 1.7.0 - Accelerates warp-level matrix-multiply and matrix-accumulate to speed up matrix multiplication (GEMM) and accumulation operations with mixed precision support. Supported features ================================================================================ Many functions and methods available upstream are also supported in DGL on ROCm. Instead of listing them all, support is grouped into the following categories to provide a general overview. * DGL Base * DGL Backend * DGL Data * DGL Dataloading * DGL Graph * DGL Function * DGL Ops * DGL Sampling * DGL Transforms * DGL Utils * DGL Distributed * DGL Geometry * DGL Mpops * DGL NN * DGL Optim * DGL Sparse * GraphBolt Unsupported features ================================================================================ * TF32 Support (only supported for PyTorch 2.7 and above) * Kineto/ROCTracer integration Unsupported functions ================================================================================ * ``bfs`` * ``format`` * ``multiprocess_sparse_adam_state_dict`` * ``half_spmm`` * ``segment_mm`` * ``gather_mm_idx_b`` * ``sample_labors_prob`` * ``sample_labors_noprob`` * ``sparse_admin`` Previous versions =============================================================================== See :doc:`rocm-install-on-linux:install/3rd-party/previous-versions/dgl-history` to find documentation for previous releases of the ``ROCm/dgl`` Docker image. --- :orphan: .. meta:: :description: FlashInfer compatibility :keywords: GPU, LLM, FlashInfer, deep learning, framework compatibility .. 
version-set:: rocm_version latest ******************************************************************************** FlashInfer compatibility ******************************************************************************** `FlashInfer `__ is a library and kernel generator for Large Language Models (LLMs) that provides a high-performance implementation of graphics processing units (GPUs) kernels. FlashInfer focuses on LLM serving and inference, as well as advanced performance across diverse scenarios. FlashInfer features highly efficient attention kernels, load-balanced scheduling, and memory-optimized techniques, while supporting customized attention variants. It’s compatible with ``torch.compile``, and offers high-performance LLM-specific operators, with easy integration through PyTorch, and C++ APIs. .. note:: The ROCm port of FlashInfer is under active development, and some features are not yet available. For the latest feature compatibility matrix, refer to the ``README`` of the `https://github.com/ROCm/flashinfer `__ repository. Support overview ================================================================================ - The ROCm-supported version of FlashInfer is maintained in the official `https://github.com/ROCm/flashinfer `__ repository, which differs from the `https://github.com/flashinfer-ai/flashinfer `__ upstream repository. - To get started and install FlashInfer on ROCm, use the prebuilt :ref:`Docker images `, which include ROCm, FlashInfer, and all required dependencies. - See the :doc:`ROCm FlashInfer installation guide ` for installation and setup instructions. - You can also consult the upstream `Installation guide `__ for additional context. Version support -------------------------------------------------------------------------------- FlashInfer is supported on `ROCm 6.4.1 `__. Supported devices -------------------------------------------------------------------------------- **Officially Supported**: AMD Instinct™ MI300X .. _flashinfer-recommendations: Use cases and recommendations ================================================================================ This release of FlashInfer on ROCm provides the decode functionality for LLM inferencing. In the decode phase, tokens are generated sequentially, with the model predicting each new token based on the previously generated tokens and the input context. FlashInfer on ROCm brings over upstream features such as load balancing, sparse and dense attention optimizations, and batching support, enabling efficient execution on AMD Instinct™ MI300X GPUs. Because large LLMs often require substantial KV caches or long context windows, FlashInfer on ROCm also implements cascade attention from upstream to reduce memory usage. For currently supported use cases and recommendations, refer to the `AMD ROCm blog `__, where you can search for examples and best practices to optimize your workloads on AMD GPUs. .. _flashinfer-docker-compat: Docker image compatibility ================================================================================ .. |docker-icon| raw:: html AMD validates and publishes `FlashInfer images `__ with ROCm backends on Docker Hub. The following Docker image tag and associated inventories represent the latest available FlashInfer version from the official Docker Hub. Click |docker-icon| to view the image on Docker Hub. .. list-table:: :header-rows: 1 :class: docker-image-compatibility * - Docker image - ROCm - FlashInfer - PyTorch - Ubuntu - Python * - .. 
raw:: html rocm/flashinfer - `6.4.1 `__ - `v0.2.5 `__ - `2.7.1 `__ - 24.04 - `3.12 `__ --- :orphan: .. meta:: :description: JAX compatibility :keywords: GPU, JAX, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************* JAX compatibility ******************************************************************************* `JAX `__ is a library for array-oriented numerical computation (similar to NumPy), with automatic differentiation and just-in-time (JIT) compilation to enable high-performance machine learning research. JAX provides an API that combines automatic differentiation and the Accelerated Linear Algebra (XLA) compiler to achieve high-performance machine learning at scale. JAX uses composable transformations of Python and NumPy through JIT compilation, automatic vectorization, and parallelization. Support overview ================================================================================ - The ROCm-supported version of JAX is maintained in the official `https://github.com/ROCm/rocm-jax `__ repository, which differs from the `https://github.com/jax-ml/jax `__ upstream repository. - To get started and install JAX on ROCm, use the prebuilt :ref:`Docker images `, which include ROCm, JAX, and all required dependencies. - See the :doc:`ROCm JAX installation guide ` for installation and setup instructions. - You can also consult the upstream `Installation guide `__ for additional context. Version support -------------------------------------------------------------------------------- AMD releases official `ROCm JAX Docker images `_ quarterly alongside new ROCm releases. These images undergo full AMD testing. `Community ROCm JAX Docker images `_ follow upstream JAX releases and use the latest available ROCm version. JAX Plugin-PJRT with JAX/JAXLIB compatibility ================================================================================ Portable JIT Runtime (PJRT) is an open, stable interface for device runtime and compiler. The following table details the ROCm version compatibility matrix between JAX Plugin–PJRT and JAX/JAXLIB. .. list-table:: :header-rows: 1 * - JAX Plugin-PJRT - JAX/JAXLIB - ROCm * - 0.7.1 - 0.7.1 - 7.1.1, 7.1.0 * - 0.6.0 - 0.6.2, 0.6.0 - 7.0.2, 7.0.1, 7.0.0 Use cases and recommendations ================================================================================ * The `nanoGPT in JAX `_ blog explores the implementation and training of a Generative Pre-trained Transformer (GPT) model in JAX, inspired by Andrej Karpathy’s PyTorch-based nanoGPT. Comparing how essential GPT components—such as self-attention mechanisms and optimizers—are realized in PyTorch and JAX also highlights JAX’s unique features. * The `Optimize GPT Training: Enabling Mixed Precision Training in JAX using ROCm on AMD GPUs `_ blog post provides a comprehensive guide on enhancing the training efficiency of GPT models by implementing mixed precision techniques in JAX, specifically tailored for AMD GPUs utilizing the ROCm platform. * The `Supercharging JAX with Triton Kernels on AMD GPUs `_ blog demonstrates how to develop a custom fused dropout-activation kernel for matrices using Triton, integrate it with JAX, and benchmark its performance using ROCm. * The `Distributed fine-tuning with JAX on AMD GPUs `_ outlines the process of fine-tuning a Bidirectional Encoder Representations from Transformers (BERT)-based large language model (LLM) using JAX for a text classification task.
The blog post discusses techniques for parallelizing the fine-tuning across multiple AMD GPUs and assess the model's performance on a holdout dataset. During the fine-tuning, a BERT-base-cased transformer model and the General Language Understanding Evaluation (GLUE) benchmark dataset was used on a multi-GPU setup. * The `MI300X workload optimization guide `_ provides detailed guidance on optimizing workloads for the AMD Instinct MI300X GPU using ROCm. The page is aimed at helping users achieve optimal performance for deep learning and other high-performance computing tasks on the MI300X GPU. For more use cases and recommendations, see `ROCm JAX blog posts `_. .. _jax-docker-compat: Docker image compatibility ================================================================================ AMD validates and publishes `JAX images `__ with ROCm backends on Docker Hub. For ``jax-community`` images, see `rocm/jax-community `__ on Docker Hub. To find the right image tag, see the :ref:`JAX on ROCm installation documentation ` for a list of available ``rocm/jax`` images. .. _key_rocm_libraries: Key ROCm libraries for JAX ================================================================================ The following ROCm libraries represent potential targets that could be utilized by JAX on ROCm for various computational tasks. The actual libraries used will depend on the specific implementation and operations performed. .. list-table:: :header-rows: 1 * - ROCm library - Version - Purpose * - `hipBLAS `_ - :version-ref:`hipBLAS rocm_version` - Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for matrix and vector operations. * - `hipBLASLt `_ - :version-ref:`hipBLASLt rocm_version` - hipBLASLt is an extension of hipBLAS, providing additional features like epilogues fused into the matrix multiplication kernel or use of integer tensor cores. * - `hipCUB `_ - :version-ref:`hipCUB rocm_version` - Provides a C++ template library for parallel algorithms for reduction, scan, sort and select. * - `hipFFT `_ - :version-ref:`hipFFT rocm_version` - Provides GPU-accelerated Fast Fourier Transform (FFT) operations. * - `hipRAND `_ - :version-ref:`hipRAND rocm_version` - Provides fast random number generation for GPUs. * - `hipSOLVER `_ - :version-ref:`hipSOLVER rocm_version` - Provides GPU-accelerated solvers for linear systems, eigenvalues, and singular value decompositions (SVD). * - `hipSPARSE `_ - :version-ref:`hipSPARSE rocm_version` - Accelerates operations on sparse matrices, such as sparse matrix-vector or matrix-matrix products. * - `hipSPARSELt `_ - :version-ref:`hipSPARSELt rocm_version` - Accelerates operations on sparse matrices, such as sparse matrix-vector or matrix-matrix products. * - `MIOpen `_ - :version-ref:`MIOpen rocm_version` - Optimized for deep learning primitives such as convolutions, pooling, normalization, and activation functions. * - `RCCL `_ - :version-ref:`RCCL rocm_version` - Optimized for multi-GPU communication for operations like all-reduce, broadcast, and scatter. * - `rocThrust `_ - :version-ref:`rocThrust rocm_version` - Provides a C++ template library for parallel algorithms like sorting, reduction, and scanning. .. note:: This table shows ROCm libraries that could potentially be utilized by JAX. Not all libraries may be used in every configuration, and the actual library usage will depend on the specific operations and implementation details. 
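As a quick way to confirm that a ROCm JAX installation is using the GPU backend (and, by extension, the libraries listed above), a minimal sketch such as the following can be run inside one of the ``rocm/jax`` containers. It relies only on standard JAX APIs; JAX selects the ROCm PJRT plugin automatically when it is installed, so no ROCm-specific code is required.

.. code-block:: python

   import jax
   import jax.numpy as jnp

   # Lists the accelerators JAX can see; on a working ROCm installation these are GPU devices.
   print(jax.devices())

   @jax.jit
   def scaled_dot(a, b):
       # JIT-compiled through XLA; on ROCm the generated code runs as HIP kernels.
       return jnp.dot(a, b) * 0.5

   key = jax.random.PRNGKey(0)
   a = jax.random.normal(key, (1024, 1024), dtype=jnp.float32)
   b = jax.random.normal(key, (1024, 1024), dtype=jnp.float32)
   print(scaled_dot(a, b).sum())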
Supported data types and modules =============================================================================== The following tables list the supported public JAX API data types and modules. Supported data types -------------------------------------------------------------------------------- ROCm supports all the JAX data types of the `jax.dtypes `_ module, `jax.numpy.dtype `_, and `default_dtype `_. The ROCm supported data types in JAX are collected in the following table. .. list-table:: :header-rows: 1 * - Data type - Description * - ``bfloat16`` - 16-bit bfloat (brain floating point). * - ``bool`` - Boolean. * - ``complex128`` - 128-bit complex. * - ``complex64`` - 64-bit complex. * - ``float16`` - 16-bit (half precision) floating-point. * - ``float32`` - 32-bit (single precision) floating-point. * - ``float64`` - 64-bit (double precision) floating-point. * - ``half`` - 16-bit (half precision) floating-point. * - ``int16`` - Signed 16-bit integer. * - ``int32`` - Signed 32-bit integer. * - ``int64`` - Signed 64-bit integer. * - ``int8`` - Signed 8-bit integer. * - ``uint16`` - Unsigned 16-bit (word) integer. * - ``uint32`` - Unsigned 32-bit (dword) integer. * - ``uint64`` - Unsigned 64-bit (qword) integer. * - ``uint8`` - Unsigned 8-bit (byte) integer. .. note:: JAX data type support is affected by the :ref:`key_rocm_libraries` and is documented on the :doc:`ROCm data types and precision support ` page. Supported modules -------------------------------------------------------------------------------- For a complete and up-to-date list of JAX public modules (for example, ``jax.numpy``, ``jax.scipy``, ``jax.lax``), their descriptions, and usage, please refer directly to the `official JAX API documentation `_. .. note:: Since version 0.1.56, JAX has full support for ROCm, and the :ref:`Known issues and important notes ` section contains details about limitations specific to the ROCm backend. The list of JAX API modules is maintained by the JAX project and is subject to change. Refer to the official JAX documentation for the most up-to-date information. Key features and enhancements for ROCm 7.0 =============================================================================== - Upgraded XLA backend: Integrates a newer XLA version, enabling better optimizations, broader operator support, and potential performance gains. - RNN support: Native RNN support (including LSTMs via ``jax.experimental.rnn``) now available on ROCm, aiding sequence model development. - Comprehensive linear algebra capabilities: Offers robust ``jax.linalg`` operations, essential for scientific and machine learning tasks. - Expanded AMD GPU architecture support: Provides ongoing support for gfx1101 GPUs and introduces support for gfx950 and gfx12xx GPUs. - Mixed FP8 precision support: Enables ``lax.dot_general`` operations with mixed FP8 types, offering pathways for memory and compute efficiency. - Streamlined PyPI packaging: Provides reliable PyPI wheels for JAX on ROCm, simplifying the installation process. - Pallas experimental kernel development: Continued Pallas framework enhancements for custom GPU kernels, including new intrinsics (specific kernel behaviors under review). - Improved build system and CI: Enhanced ROCm build system and CI for greater reliability and maintainability. - Enhanced distributed computing setup: Improved JAX setup in multi-GPU distributed environments. ..
_jax_comp_known_issues: Known issues and notes for ROCm 7.0 =============================================================================== - ``nn.dot_product_attention``: Certain configurations of ``jax.nn.dot_product_attention`` may cause segmentation faults, though the majority of use cases work correctly. - SVD with dynamic shapes: SVD on inputs with dynamic/symbolic shapes might result in an error. SVD with static shapes is unaffected. - QR decomposition with symbolic shapes: QR decomposition operations may fail when using symbolic/dynamic shapes in shape polymorphic contexts. - Pallas kernels: Specific advanced Pallas kernels may exhibit variations in numerical output or resource usage. These are actively reviewed as part of Pallas's experimental development. --- :orphan: .. meta:: :description: llama.cpp compatibility :keywords: GPU, GGML, llama.cpp, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************** llama.cpp compatibility ******************************************************************************** `llama.cpp `__ is an open-source framework for Large Language Model (LLM) inference that runs on both central processing units (CPUs) and graphics processing units (GPUs). It is written in plain C/C++, providing a simple, dependency-free setup. The framework supports multiple quantization options, from 1.5-bit to 8-bit integers, to accelerate inference and reduce memory usage. Originally built as a CPU-first library, llama.cpp is easy to integrate with other programming environments and is widely adopted across diverse platforms, including consumer devices. Support overview ================================================================================ - The ROCm-supported version of llama.cpp is maintained in the official `https://github.com/ROCm/llama.cpp `__ repository, which differs from the `https://github.com/ggml-org/llama.cpp `__ upstream repository. - To get started and install llama.cpp on ROCm, use the prebuilt :ref:`Docker images `, which include ROCm, llama.cpp, and all required dependencies. - See the :doc:`ROCm llama.cpp installation guide ` for installation and setup instructions. - You can also consult the upstream `Installation guide `__ for additional context. Version support -------------------------------------------------------------------------------- llama.cpp is supported on `ROCm 7.0.0 `__ and `ROCm 6.4.x `__. 
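Because the llama.cpp builds listed on this page are tied to specific ROCm releases, it can be useful to confirm which ROCm release is installed before choosing a build or Docker tag. The short sketch below is one way to do that; it assumes a standard package installation that records the release under ``/opt/rocm/.info/version``, so adjust the path if ROCm is installed to a custom prefix.

.. code-block:: python

   from pathlib import Path

   # Standard ROCm packages record the installed release here, for example "7.0.0-...".
   version_file = Path("/opt/rocm/.info/version")

   if version_file.exists():
       release = version_file.read_text().strip()
       major_minor = ".".join(release.split(".")[:2])
       print("Installed ROCm:", release)
       # llama.cpp on ROCm is validated on ROCm 7.0.0 and 6.4.x, per the version support note above.
       if major_minor not in {"7.0", "6.4"}:
           print("Warning: this ROCm release is not listed as validated for llama.cpp.")
   else:
       print("ROCm version file not found; check the installation prefix.")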
Supported devices -------------------------------------------------------------------------------- **Officially Supported**: AMD Instinct™ MI325X, MI300X, MI210 Use cases and recommendations ================================================================================ llama.cpp can be applied in a variety of scenarios, particularly when you need to meet one or more of the following requirements: - Plain C/C++ implementation with no external dependencies - Support for 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory usage - Custom HIP (Heterogeneous-compute Interface for Portability) kernels for running large language models (LLMs) on AMD GPUs (graphics processing units) - CPU (central processing unit) + GPU (graphics processing unit) hybrid inference for partially accelerating models larger than the total available VRAM (video random-access memory) llama.cpp is also used in a range of real-world applications, including: - Games such as `Lucy's Labyrinth `__: A simple maze game where AI-controlled agents attempt to trick the player. - Tools such as `Styled Lines `__: A proprietary, asynchronous inference wrapper for Unity3D game development, including pre-built mobile and web platform wrappers and a model example. - Various other AI applications use llama.cpp as their inference engine; for a detailed list, see the `user interfaces (UIs) section `__. For more use cases and recommendations, refer to the `AMD ROCm blog `__, where you can search for llama.cpp examples and best practices to optimize your workloads on AMD GPUs. - The `Llama.cpp Meets Instinct: A New Era of Open-Source AI Acceleration `__ blog post outlines how the open-source llama.cpp framework enables efficient LLM inference—including interactive inference with ``llama-cli``, server deployment with ``llama-server``, GGUF model preparation and quantization, performance benchmarking, and optimizations tailored for AMD Instinct GPUs within the ROCm ecosystem. .. _llama-cpp-docker-compat: Docker image compatibility ================================================================================ .. |docker-icon| raw:: html AMD validates and publishes `llama.cpp images `__ with ROCm backends on Docker Hub. The following Docker image tags and associated inventories represent the latest available llama.cpp versions from the official Docker Hub. Click |docker-icon| to view the image on Docker Hub. .. important:: Tag endings of ``_full``, ``_server``, and ``_light`` serve different purposes for entrypoints as follows: - Full: This image includes both the main executable file and the tools to convert ``LLaMA`` models into ``ggml`` and convert into 4-bit quantization. - Server: This image only includes the server executable file. - Light: This image only includes the main executable file. .. list-table:: :header-rows: 1 :class: docker-image-compatibility * - Full Docker - Server Docker - Light Docker - llama.cpp - ROCm - Ubuntu * - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b6652 `__ - `7.0.0 `__ - 24.04 * - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b6652 `__ - `7.0.0 `__ - 22.04 * - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b6356 `__ - `6.4.3 `__ - 24.04 * - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b6356 `__ - `6.4.3 `__ - 22.04 * - .. raw:: html rocm/llama.cpp - .. 
raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b6356 `__ - `6.4.2 `__ - 24.04 * - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b6356 `__ - `6.4.2 `__ - 22.04 * - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b6356 `__ - `6.4.1 `__ - 24.04 * - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b6356 `__ - `6.4.1 `__ - 22.04 * - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - .. raw:: html rocm/llama.cpp - `b5997 `__ - `6.4.0 `__ - 24.04 Key ROCm libraries for llama.cpp ================================================================================ llama.cpp functionality on ROCm is determined by its underlying library dependencies. These ROCm components affect the capabilities, performance, and feature set available to developers. Ensure you have the required libraries for your corresponding ROCm version. .. list-table:: :header-rows: 1 * - ROCm library - ROCm 7.0.0 version - ROCm 6.4.x version - Purpose - Usage * - `hipBLAS `__ - 3.0.0 - 2.4.0 - Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for matrix and vector operations. - Supports operations such as matrix multiplication, matrix-vector products, and tensor contractions. Utilized in both dense and batched linear algebra operations. * - `hipBLASLt `__ - 1.0.0 - 0.12.0 - hipBLASLt is an extension of the hipBLAS library, providing additional features like epilogues fused into the matrix multiplication kernel or use of integer tensor cores. - By setting the flag ``ROCBLAS_USE_HIPBLASLT``, you can dispatch hipblasLt kernels where possible. * - `rocWMMA `__ - 2.0.0 - 1.7.0 - Accelerates warp-level matrix-multiply and matrix-accumulate to speed up matrix multiplication (GEMM) and accumulation operations with mixed precision support. - Can be used to enhance the flash attention performance on AMD compute, by enabling the flag during compile time. Previous versions =============================================================================== See :doc:`rocm-install-on-linux:install/3rd-party/previous-versions/llama-cpp-history` to find documentation for previous releases of the ``ROCm/llama.cpp`` Docker image. --- :orphan: .. meta:: :description: Megablocks compatibility :keywords: GPU, megablocks, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************** Megablocks compatibility ******************************************************************************** `Megablocks `__ is a lightweight library for mixture-of-experts `(MoE) `__ training. The core of the system is efficient "dropless-MoE" and standard MoE layers. Megablocks is integrated with `https://github.com/stanford-futuredata/Megatron-LM `__, where data and pipeline parallel training of MoEs is supported. Support overview ================================================================================ - The ROCm-supported version of Megablocks is maintained in the official `https://github.com/ROCm/megablocks `__ repository, which differs from the `https://github.com/stanford-futuredata/Megatron-LM `__ upstream repository. - To get started and install Megablocks on ROCm, use the prebuilt :ref:`Docker image `, which includes ROCm, Megablocks, and all required dependencies. - See the :doc:`ROCm Megablocks installation guide ` for installation and setup instructions. 
- You can also consult the upstream `Installation guide `__ for additional context. Version support -------------------------------------------------------------------------------- Megablocks is supported on `ROCm 6.3.0 `__. Supported devices -------------------------------------------------------------------------------- - **Officially Supported**: AMD Instinct™ MI300X - **Partially Supported** (functionality or performance limitations): AMD Instinct™ MI250X, MI210 Supported models and features -------------------------------------------------------------------------------- This section summarizes the Megablocks features supported by ROCm. * Distributed Pre-training * Activation Checkpointing and Recomputation * Distributed Optimizer * Mixture-of-Experts * dropless-Mixture-of-Experts .. _megablocks-recommendations: Use cases and recommendations ================================================================================ * The `Efficient MoE training on AMD ROCm: How-to use Megablocks on AMD GPUs `__ blog post guides how to leverage the ROCm platform for pre-training using the Megablocks framework. It introduces a streamlined approach for training Mixture-of-Experts (MoE) models using the Megablocks library on AMD hardware. Focusing on GPT-2, it demonstrates how block-sparse computations can enhance scalability and efficiency in MoE training. The guide provides step-by-step instructions for setting up the environment, including cloning the repository, building the Docker image, and running the training container. Additionally, it offers insights into utilizing the ``oscar-1GB.json`` dataset for pre-training language models. By leveraging Megablocks and the ROCm platform, you can optimize your MoE training workflows for large-scale transformer models. It features how to pre-process datasets and how to begin pre-training on AMD GPUs through: * Single-GPU pre-training * Multi-GPU pre-training .. _megablocks-docker-compat: Docker image compatibility ================================================================================ .. |docker-icon| raw:: html AMD validates and publishes `Megablocks images `__ with ROCm backends on Docker Hub. The following Docker image tag and associated inventories represent the latest available Megablocks version from the official Docker Hub. Click |docker-icon| to view the image on Docker Hub. .. list-table:: :header-rows: 1 :class: docker-image-compatibility * - Docker image - ROCm - Megablocks - PyTorch - Ubuntu - Python * - .. raw:: html rocm/megablocks - `6.3.0 `_ - `0.7.0 `_ - `2.4.0 `_ - 24.04 - `3.12.9 `_ --- :orphan: .. meta:: :description: PyTorch compatibility :keywords: GPU, PyTorch, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************** PyTorch compatibility ******************************************************************************** `PyTorch `__ is an open-source tensor library designed for deep learning. PyTorch on ROCm provides mixed-precision and large-scale training using `MIOpen `__ and `RCCL `__ libraries. PyTorch provides two high-level features: - Tensor computation (like NumPy) with strong GPU acceleration - Deep neural networks built on a tape-based autograd system (rapid computation of multiple partial derivatives or gradients) Support overview ================================================================================ ROCm support for PyTorch is upstreamed into the official PyTorch repository. 
ROCm development is aligned with the stable release of PyTorch, while upstream PyTorch testing uses the stable release of ROCm to maintain consistency: - The ROCm-supported version of PyTorch is maintained in the official `https://github.com/ROCm/pytorch `__ repository, which differs from the `https://github.com/pytorch/pytorch `__ upstream repository. - To get started and install PyTorch on ROCm, use the prebuilt :ref:`Docker images `, which include ROCm, PyTorch, and all required dependencies. - See the :doc:`ROCm PyTorch installation guide ` for installation and setup instructions. - You can also consult the upstream `Installation guide `__ or `Previous versions `__ for additional context. PyTorch includes tooling that generates HIP source code from the CUDA backend. This approach allows PyTorch to support ROCm without requiring manual code modifications. For more information, see :doc:`HIPIFY `. Version support -------------------------------------------------------------------------------- AMD releases official `ROCm PyTorch Docker images `_ quarterly alongside new ROCm releases. These images undergo full AMD testing. .. _pytorch-recommendations: Use cases and recommendations ================================================================================ * :doc:`Using ROCm for AI: training a model ` guides how to leverage the ROCm platform for training AI models. It covers the steps, tools, and best practices for optimizing training workflows on AMD GPUs using PyTorch features. * :doc:`Single-GPU fine-tuning and inference ` describes and demonstrates how to use the ROCm platform for the fine-tuning and inference of machine learning models, particularly large language models (LLMs), on systems with a single GPU. This topic provides a detailed guide for setting up, optimizing, and executing fine-tuning and inference workflows in such environments. * :doc:`Multi-GPU fine-tuning and inference optimization ` describes and demonstrates the fine-tuning and inference of machine learning models on systems with multiple GPUs. * The :doc:`Instinct MI300X workload optimization guide ` provides detailed guidance on optimizing workloads for the AMD Instinct MI300X GPU using ROCm. This guide helps users achieve optimal performance for deep learning and other high-performance computing tasks on the MI300X GPU. * The :doc:`Inception with PyTorch documentation ` describes how PyTorch integrates with ROCm for AI workloads. It outlines the use of PyTorch on the ROCm platform and focuses on efficiently leveraging AMD GPU hardware for training and inference tasks in AI applications. For more use cases and recommendations, see `ROCm PyTorch blog posts `__. .. _pytorch-docker-compat: Docker image compatibility ================================================================================ AMD validates and publishes `PyTorch images `__ with ROCm backends on Docker Hub. To find the right image tag, see the :ref:`PyTorch on ROCm installation documentation ` for a list of available ``rocm/pytorch`` images. Key ROCm libraries for PyTorch ================================================================================ PyTorch functionality on ROCm is determined by its underlying library dependencies. These ROCm components affect the capabilities, performance, and feature set available to developers. .. 
list-table:: :header-rows: 1 * - ROCm library - Version - Purpose - Used in * - `Composable Kernel `__ - :version-ref:`"Composable Kernel" rocm_version` - Enables faster execution of core operations like matrix multiplication (GEMM), convolutions and transformations. - Speeds up ``torch.permute``, ``torch.view``, ``torch.matmul``, ``torch.mm``, ``torch.bmm``, ``torch.nn.Conv2d``, ``torch.nn.Conv3d`` and ``torch.nn.MultiheadAttention``. * - `hipBLAS `__ - :version-ref:`hipBLAS rocm_version` - Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for matrix and vector operations. - Supports operations such as matrix multiplication, matrix-vector products, and tensor contractions. Utilized in both dense and batched linear algebra operations. * - `hipBLASLt `__ - :version-ref:`hipBLASLt rocm_version` - hipBLASLt is an extension of the hipBLAS library, providing additional features like epilogues fused into the matrix multiplication kernel or use of integer tensor cores. - Accelerates operations such as ``torch.matmul``, ``torch.mm``, and the matrix multiplications used in convolutional and linear layers. * - `hipCUB `__ - :version-ref:`hipCUB rocm_version` - Provides a C++ template library for parallel algorithms for reduction, scan, sort and select. - Supports operations such as ``torch.sum``, ``torch.cumsum``, ``torch.sort`` irregular shapes often involve scanning, sorting, and filtering, which hipCUB handles efficiently. * - `hipFFT `__ - :version-ref:`hipFFT rocm_version` - Provides GPU-accelerated Fast Fourier Transform (FFT) operations. - Used in functions like the ``torch.fft`` module. * - `hipRAND `__ - :version-ref:`hipRAND rocm_version` - Provides fast random number generation for GPUs. - The ``torch.rand``, ``torch.randn``, and stochastic layers like ``torch.nn.Dropout`` rely on hipRAND. * - `hipSOLVER `__ - :version-ref:`hipSOLVER rocm_version` - Provides GPU-accelerated solvers for linear systems, eigenvalues, and singular value decompositions (SVD). - Supports functions like ``torch.linalg.solve``, ``torch.linalg.eig``, and ``torch.linalg.svd``. * - `hipSPARSE `__ - :version-ref:`hipSPARSE rocm_version` - Accelerates operations on sparse matrices, such as sparse matrix-vector or matrix-matrix products. - Sparse tensor operations ``torch.sparse``. * - `hipSPARSELt `__ - :version-ref:`hipSPARSELt rocm_version` - Accelerates operations on sparse matrices, such as sparse matrix-vector or matrix-matrix products. - Sparse tensor operations ``torch.sparse``. * - `hipTensor `__ - :version-ref:`hipTensor rocm_version` - Optimizes for high-performance tensor operations, such as contractions. - Accelerates tensor algebra, especially in deep learning and scientific computing. * - `MIOpen `__ - :version-ref:`MIOpen rocm_version` - Optimizes deep learning primitives such as convolutions, pooling, normalization, and activation functions. - Speeds up convolutional neural networks (CNNs), recurrent neural networks (RNNs), and other layers. Used in operations like ``torch.nn.Conv2d``, ``torch.nn.ReLU``, and ``torch.nn.LSTM``. * - `MIGraphX `__ - :version-ref:`MIGraphX rocm_version` - Adds graph-level optimizations, ONNX models and mixed precision support and enable Ahead-of-Time (AOT) Compilation. - Speeds up inference models and executes ONNX models for compatibility with other frameworks. ``torch.nn.Conv2d``, ``torch.nn.ReLU``, and ``torch.nn.LSTM``. 
* - `MIVisionX `__ - :version-ref:`MIVisionX rocm_version` - Optimizes acceleration for computer vision and AI workloads like preprocessing, augmentation, and inferencing. - Faster data preprocessing and augmentation pipelines for datasets like ImageNet or COCO and easy to integrate into PyTorch's ``torch.utils.data`` and ``torchvision`` workflows. * - `rocAL `__ - :version-ref:`rocAL rocm_version` - Accelerates the data pipeline by offloading intensive preprocessing and augmentation tasks. rocAL is part of MIVisionX. - Easy to integrate into PyTorch's ``torch.utils.data`` and ``torchvision`` data load workloads. * - `RCCL `__ - :version-ref:`RCCL rocm_version` - Optimizes for multi-GPU communication for operations like AllReduce and Broadcast. - Distributed data parallel training (``torch.nn.parallel.DistributedDataParallel``). Handles communication in multi-GPU setups. * - `rocDecode `__ - :version-ref:`rocDecode rocm_version` - Provides hardware-accelerated data decoding capabilities, particularly for image, video, and other dataset formats. - Can be integrated in ``torch.utils.data``, ``torchvision.transforms`` and ``torch.distributed``. * - `rocJPEG `__ - :version-ref:`rocJPEG rocm_version` - Provides hardware-accelerated JPEG image decoding and encoding. - GPU accelerated ``torchvision.io.decode_jpeg`` and ``torchvision.io.encode_jpeg`` and can be integrated in ``torch.utils.data`` and ``torchvision``. * - `RPP `__ - :version-ref:`RPP rocm_version` - Speeds up data augmentation, transformation, and other preprocessing steps. - Easy to integrate into PyTorch's ``torch.utils.data`` and ``torchvision`` data load workloads to speed up data processing. * - `rocThrust `__ - :version-ref:`rocThrust rocm_version` - Provides a C++ template library for parallel algorithms like sorting, reduction, and scanning. - Utilized in backend operations for tensor computations requiring parallel processing. * - `rocWMMA `__ - :version-ref:`rocWMMA rocm_version` - Accelerates warp-level matrix-multiply and matrix-accumulate to speed up matrix multiplication (GEMM) and accumulation operations with mixed precision support. - Linear layers (``torch.nn.Linear``), convolutional layers (``torch.nn.Conv2d``), attention layers, general tensor operations that involve matrix products, such as ``torch.matmul``, ``torch.bmm``, and more. Supported modules and data types ================================================================================ The following section outlines the supported data types, modules, and domain libraries available in PyTorch on ROCm. Supported data types -------------------------------------------------------------------------------- The tensor data type is specified using the ``dtype`` attribute or argument. PyTorch supports many data types for different use cases. The following table lists `torch.Tensor `__ single data types: .. 
list-table:: :header-rows: 1 * - Data type - Description * - ``torch.float8_e4m3fn`` - 8-bit floating point, e4m3 * - ``torch.float8_e5m2`` - 8-bit floating point, e5m2 * - ``torch.float16`` or ``torch.half`` - 16-bit floating point * - ``torch.bfloat16`` - 16-bit floating point * - ``torch.float32`` or ``torch.float`` - 32-bit floating point * - ``torch.float64`` or ``torch.double`` - 64-bit floating point * - ``torch.complex32`` or ``torch.chalf`` - 32-bit complex numbers * - ``torch.complex64`` or ``torch.cfloat`` - 64-bit complex numbers * - ``torch.complex128`` or ``torch.cdouble`` - 128-bit complex numbers * - ``torch.uint8`` - 8-bit integer (unsigned) * - ``torch.uint16`` - 16-bit integer (unsigned); Not natively supported in ROCm * - ``torch.uint32`` - 32-bit integer (unsigned); Not natively supported in ROCm * - ``torch.uint64`` - 64-bit integer (unsigned); Not natively supported in ROCm * - ``torch.int8`` - 8-bit integer (signed) * - ``torch.int16`` or ``torch.short`` - 16-bit integer (signed) * - ``torch.int32`` or ``torch.int`` - 32-bit integer (signed) * - ``torch.int64`` or ``torch.long`` - 64-bit integer (signed) * - ``torch.bool`` - Boolean * - ``torch.quint8`` - Quantized 8-bit integer (unsigned) * - ``torch.qint8`` - Quantized 8-bit integer (signed) * - ``torch.qint32`` - Quantized 32-bit integer (signed) * - ``torch.quint4x2`` - Quantized 4-bit integer (unsigned) .. note:: Unsigned types, except ``uint8``, have limited support in eager mode. They primarily exist to assist usage with ``torch.compile``. See :doc:`ROCm precision support ` for the native hardware support of data types. Supported modules -------------------------------------------------------------------------------- For a complete and up-to-date list of PyTorch core modules (for example., ``torch``, ``torch.nn``, ``torch.cuda``, ``torch.backends.cuda`` and ``torch.backends.cudnn``), their descriptions, and usage, please refer directly to the `official PyTorch documentation `_. Core PyTorch functionality on ROCm includes tensor operations, neural network layers, automatic differentiation, distributed training, mixed-precision training, compilation features, and domain-specific libraries for audio, vision, text processing, and more. Supported domain libraries -------------------------------------------------------------------------------- PyTorch offers specialized `domain libraries `_ with GPU acceleration that build on its core features to support specific application areas. The table below lists the PyTorch domain libraries that are compatible with ROCm. .. list-table:: :header-rows: 1 * - Library - Description * - `torchaudio `_ - Audio and signal processing library for PyTorch. Provides utilities for audio I/O, signal and data processing functions, datasets, model implementations, and application components for audio and speech processing tasks. **Note:** To ensure GPU-acceleration with ``torchaudio.transforms``, you need to explicitly move audio data (waveform tensor) to GPU using ``.to('cuda')``. * - `torchtune `_ - PyTorch-native library designed for fine-tuning large language models (LLMs). Provides supports the full fine-tuning workflow and offers compatibility with popular production inference systems. **Note:** Only official release exists. * - `torchvision `_ - Computer vision library that is part of the PyTorch project. Provides popular datasets, model architectures, and common image transformations for computer vision applications. 
* - `torchdata `_ - Beta library of common modular data loading primitives for easily constructing flexible and performant data pipelines, with features still in the prototype stage. * - `torchrec `_ - PyTorch domain library for common sparsity and parallelism primitives needed for large-scale recommender systems, enabling authors to train models with large embedding tables shared across many GPUs. **Note:** ``torchrec`` does not implement ROCm-specific kernels. ROCm acceleration is provided through the underlying PyTorch framework and ROCm library integration. * - `torchserve `_ - Performant, flexible and easy-to-use tool for serving PyTorch models in production, providing features for model management, batch processing, and scalable deployment. **Note:** `torchserve `_ is no longer actively maintained. The last official release shipped with PyTorch 2.4. * - `torchrl `_ - Open-source, Python-first Reinforcement Learning library for PyTorch with a focus on high modularity and good runtime performance, providing low and high-level RL abstractions and reusable functionals for cost functions, returns, and data processing. **Note:** Only the official release exists. * - `tensordict `_ - Dictionary-like class that simplifies operations on batches of tensors, enhancing code readability, compactness, and modularity by abstracting tailored operations and reducing errors through automatic operation dispatching. **Note:** Only the official release exists. Key features and enhancements for PyTorch 2.9 with ROCm 7.1.1 ================================================================================ - Scaled Dot Product Attention (SDPA) upgraded to use AOTriton version 0.11b. - Default hipBLASLt support enabled for gfx908 architecture on ROCm 6.3 and later. - MIOpen now supports channels last memory format for 3D convolutions and batch normalization. - NHWC convolution operations in MIOpen optimized by eliminating unnecessary transpose operations. - Improved ``tensor.item()`` performance by removing redundant synchronization. - Enhanced performance for element-wise operations and reduction kernels. - Added support for grouped GEMM operations through fbgemm_gpu generative AI components. - Resolved device error in Inductor when using CUDA graph trees with HIP. - Corrected logsumexp scaling in AOTriton-based SDPA implementation. - Added stream graph capture status validation in memory copy synchronization functions. Key features and enhancements for PyTorch 2.8 with ROCm 7.1 ================================================================================ - MIOpen deep learning optimizations: Further optimized NHWC BatchNorm feature. - Added float8 support for the DeepSpeed extension, allowing for decreased memory footprint and increased throughput in training and inference workloads. - ``torch.nn.functional.scaled_dot_product_attention`` now calls the optimized flash attention kernel automatically. Key features and enhancements for PyTorch 2.7/2.8 with ROCm 7.0 ================================================================================ - Enhanced TunableOp framework: Introduces ``tensorfloat32`` support for TunableOp operations, improved offline tuning for ScaledGEMM operations, submatrix offline tuning capabilities, and better logging for BLAS operations without bias vectors. - Expanded GPU architecture support: Provides optimized support for newer GPU architectures, including gfx1200 and gfx1201 with preferred hipBLASLt backend selection, along with improvements for gfx950 and gfx1100 Series GPUs. 
- Advanced Triton Integration: AOTriton 0.10b introduces official support for gfx950 and gfx1201, along with experimental support for gfx1101, gfx1151, gfx1150, and gfx1200. - Improved element-wise kernel performance: Delivers enhanced vectorized element-wise kernels with better support for heterogeneous tensor types and optimized input vectorization for tensors with mixed data types. - MIOpen deep learning optimizations: Enables NHWC BatchNorm by default on ROCm 7.0+, provides ``maxpool`` forward and backward performance improvements targeting ResNet scenarios, and includes updated launch configurations for better performance. - Enhanced memory and tensor operations: Features fixes for in-place ``aten`` sum operations with specialized templated kernels, improved 3D tensor performance with NHWC format, and better handling of memory-bound matrix multiplication operations. - Robust testing and quality improvements: Includes comprehensive test suite updates with improved tolerance handling for Navi3x architectures, generalized ROCm-specific test conditions, and enhanced unit test coverage for Flash Attention and Memory Efficient operations. - Composable Kernel (CK) updates: Features updated CK submodule integration with the latest optimizations and performance improvements for core mathematical operations. - Development and debugging enhancements: Includes improved source handling for dynamic compilation, better error handling for atomic operations, and enhanced state checking for trace operations. - Integrate APEX fused layer normalization, which can have positive impact on text-to-video models. - Integrate APEX distributed fused LAMB and distributed fused ADAM, which can have positive impact on BERT-L and Llama2-SFT. - FlashAttention v3 has been integrated for AMD GPUs. - `Pytorch C++ extensions `_ provide a mechanism for compiling custom operations that can be used during network training or inference. For AMD platforms, ``amdclang++`` has been validated as the supported compiler for building these extensions. Known issues and notes for PyTorch 2.7/2.8 with ROCm 7.0 and ROCm 7.1 ================================================================================ - The ``matmul.allow_fp16_reduced_precision_reduction`` and ``matmul.allow_bf16_reduced_precision_reduction`` options under ``torch.backends.cuda`` are not supported. As a result, reduced-precision reductions using FP16 or BF16 accumulation types are not available. --- :orphan: .. meta:: :description: Ray compatibility :keywords: GPU, Ray, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************* Ray compatibility ******************************************************************************* Ray is a unified framework for scaling AI and Python applications from your laptop to a full cluster, without changing your code. Ray consists of `a core distributed runtime `_ and a set of `AI libraries `_ for simplifying machine learning computations. Ray is a general-purpose framework that runs many types of workloads efficiently. Any Python application can be scaled with Ray, without extra infrastructure. Support overview ================================================================================ - The ROCm-supported version of Ray is maintained in the official `https://github.com/ROCm/ray `__ repository, which differs from the `https://github.com/ray-project/ray `__ upstream repository. 
- To get started and install Ray on ROCm, use the prebuilt :ref:`Docker image `, which includes ROCm, Ray, and all required dependencies. - The Docker image provided is based on the upstream Ray `Daily Release (Nightly) wheels `__ corresponding to commit `005c372 `__. - See the :doc:`ROCm Ray installation guide ` for installation and setup instructions. - You can also consult the upstream `Installation guide `__ for additional context. Version support -------------------------------------------------------------------------------- Ray is supported on `ROCm 6.4.1 `__. Supported devices -------------------------------------------------------------------------------- **Officially Supported**: AMD Instinct™ MI300X, MI210 Use cases and recommendations ================================================================================ * The `Reinforcement Learning from Human Feedback on AMD GPUs with verl and ROCm Integration `__ blog provides an overview of Volcano Engine Reinforcement Learning (verl) for large language models (LLMs) and discusses its benefits in large-scale reinforcement learning from human feedback (RLHF). It uses Ray as part of a hybrid orchestration engine to schedule and coordinate training and inference tasks in parallel, enabling optimized resource utilization and potential overlap between these phases. This dynamic resource allocation strategy significantly improves overall system efficiency. The blog presents verl’s performance results, focusing on throughput and convergence accuracy achieved on AMD Instinct™ MI300X GPUs. Follow this guide to get started with verl on AMD Instinct GPUs and accelerate your RLHF training with ROCm-optimized performance. * The `Exploring Use Cases for Scalable AI: Implementing Ray with ROCm Support for Efficient ML Workflows `__ blog post describes key use cases such as training and inference for large language models (LLMs), model serving, hyperparameter tuning, reinforcement learning, and the orchestration of large-scale workloads using Ray in the ROCm environment. For more use cases and recommendations, see the AMD GPU tabs in the `Accelerator Support topic `__ of the Ray core documentation and refer to the `AMD ROCm blog `__, where you can search for Ray examples and best practices to optimize your workloads on AMD GPUs. .. _ray-docker-compat: Docker image compatibility ================================================================================ .. |docker-icon| raw:: html AMD validates and publishes ready-made `ROCm Ray Docker images `__ with ROCm backends on Docker Hub. The following Docker image tags and associated inventories represent the latest Ray version from the official Docker Hub. Click the |docker-icon| icon to view the image on Docker Hub. .. list-table:: :header-rows: 1 :class: docker-image-compatibility * - Docker image - ROCm - Ray - Pytorch - Ubuntu - Python * - .. raw:: html rocm/ray - `6.4.1 `__. - `2.48.0.post0 `_ - 2.6.0+git684f6f2 - 24.04 - `3.12.10 `_ --- :orphan: .. meta:: :description: Stanford Megatron-LM compatibility :keywords: Stanford, Megatron-LM, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************** Stanford Megatron-LM compatibility ******************************************************************************** Stanford Megatron-LM is a large-scale language model training framework developed by NVIDIA at `https://github.com/NVIDIA/Megatron-LM `_. 
It is designed to train massive transformer-based language models efficiently using model and data parallelism. It provides efficient tensor, pipeline, and sequence-based model parallelism for pre-training transformer-based language models such as GPT (Decoder Only), BERT (Encoder Only), and T5 (Encoder-Decoder). Support overview ================================================================================ - The ROCm-supported version of Stanford Megatron-LM is maintained in the official `https://github.com/ROCm/Stanford-Megatron-LM `__ repository, which differs from the `https://github.com/stanford-futuredata/Megatron-LM `__ upstream repository. - To get started and install Stanford Megatron-LM on ROCm, use the prebuilt :ref:`Docker image `, which includes ROCm, Stanford Megatron-LM, and all required dependencies. - See the :doc:`ROCm Stanford Megatron-LM installation guide ` for installation and setup instructions. - You can also consult the upstream `Installation guide `__ for additional context. Version support -------------------------------------------------------------------------------- Stanford Megatron-LM is supported on `ROCm 6.3.0 `__. Supported devices -------------------------------------------------------------------------------- - **Officially Supported**: AMD Instinct™ MI300X - **Partially Supported** (functionality or performance limitations): AMD Instinct™ MI250X, MI210 Supported models and features -------------------------------------------------------------------------------- This section details the models and features supported by the ROCm version of Stanford Megatron-LM. Models: * BERT * GPT * T5 * ICT Features: * Distributed Pre-training * Activation Checkpointing and Recomputation * Distributed Optimizer * Mixture-of-Experts .. _megatron-lm-recommendations: Use cases and recommendations ================================================================================ The following blog post mentions Megablocks, but you can run Stanford Megatron-LM with the same steps to pre-process datasets on AMD GPUs: * The `Efficient MoE training on AMD ROCm: How-to use Megablocks on AMD GPUs `__ blog post explains how to leverage the ROCm platform for pre-training using the Megablocks framework. It introduces a streamlined approach for training Mixture-of-Experts (MoE) models using the Megablocks library on AMD hardware. Focusing on GPT-2, it demonstrates how block-sparse computations can enhance scalability and efficiency in MoE training. The guide provides step-by-step instructions for setting up the environment, including cloning the repository, building the Docker image, and running the training container. Additionally, it offers insights into utilizing the ``oscar-1GB.json`` dataset for pre-training language models. By leveraging Megablocks and the ROCm platform, you can optimize your MoE training workflows for large-scale transformer models. It shows how to pre-process datasets and how to begin pre-training on AMD GPUs through: * Single-GPU pre-training * Multi-GPU pre-training .. _megatron-lm-docker-compat: Docker image compatibility ================================================================================ .. |docker-icon| raw:: html AMD validates and publishes `Stanford Megatron-LM images `_ with ROCm and PyTorch backends on Docker Hub. The following Docker image tags and associated inventories represent the latest Stanford Megatron-LM version from the official Docker Hub. Click |docker-icon| to view the image on Docker Hub. .. 
list-table:: :header-rows: 1 :class: docker-image-compatibility * - Docker image - ROCm - Stanford Megatron-LM - PyTorch - Ubuntu - Python * - .. raw:: html - `6.3.0 `_ - `85f95ae `_ - `2.4.0 `_ - 24.04 - `3.12.9 `_ --- :orphan: .. meta:: :description: TensorFlow compatibility :keywords: GPU, TensorFlow, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************* TensorFlow compatibility ******************************************************************************* `TensorFlow `__ is an open-source library for solving machine learning, deep learning, and AI problems. It can solve many problems across different sectors and industries, but primarily focuses on neural network training and inference. It is one of the most popular deep learning frameworks and is very active in open-source development. Support overview ================================================================================ - The ROCm-supported version of TensorFlow is maintained in the official `https://github.com/ROCm/tensorflow-upstream `__ repository, which differs from the `https://github.com/tensorflow/tensorflow `__ upstream repository. - To get started and install TensorFlow on ROCm, use the prebuilt :ref:`Docker images `, which include ROCm, TensorFlow, and all required dependencies. - See the :doc:`ROCm TensorFlow installation guide ` for installation and setup instructions. - You can also consult the `TensorFlow API versions `__ list for additional context. Version support -------------------------------------------------------------------------------- The `official TensorFlow repository `__ includes full ROCm support. AMD maintains a TensorFlow `ROCm repository `__ in order to quickly add bug fixes, updates, and support for the latest ROCm versions. .. _tensorflow-docker-compat: Docker image compatibility ================================================================================ AMD provides preconfigured Docker images with TensorFlow and the ROCm backend. These images are published on `Docker Hub `__ and are the recommended way to get started with deep learning with TensorFlow on ROCm. To find the right image tag, see the :ref:`TensorFlow on ROCm installation documentation ` for a list of available ``rocm/tensorflow`` images. Critical ROCm libraries for TensorFlow =============================================================================== TensorFlow depends on multiple components and the supported features of those components can affect the TensorFlow ROCm supported feature set. The versions in the following table refer to the first TensorFlow version where the ROCm library was introduced as a dependency. The versions described are available in ROCm :version:`rocm_version`. .. list-table:: :widths: 25, 10, 35, 30 :header-rows: 1 * - ROCm library - Version - Purpose - Used in * - `hipBLAS `__ - :version-ref:`hipBLAS rocm_version` - Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for matrix and vector operations. - Accelerates operations like ``tf.matmul``, ``tf.linalg.matmul``, and other matrix multiplications commonly used in neural network layers. * - `hipBLASLt `__ - :version-ref:`hipBLASLt rocm_version` - Extends hipBLAS with additional optimizations like fused kernels and integer tensor cores. - Optimizes matrix multiplications and linear algebra operations used in layers like dense, convolutional, and RNNs in TensorFlow. 
* - `hipCUB `__ - :version-ref:`hipCUB rocm_version` - Provides a C++ template library for parallel algorithms for reduction, scan, sort and select. - Supports operations like ``tf.reduce_sum``, ``tf.cumsum``, ``tf.sort`` and other tensor operations in TensorFlow, especially those involving scanning, sorting, and filtering. * - `hipFFT `__ - :version-ref:`hipFFT rocm_version` - Accelerates Fast Fourier Transforms (FFT) for signal processing tasks. - Used for operations like signal processing, image filtering, and certain types of neural networks requiring FFT-based transformations. * - `hipSOLVER `__ - :version-ref:`hipSOLVER rocm_version` - Provides GPU-accelerated direct linear solvers for dense and sparse systems. - Optimizes linear algebra functions such as solving systems of linear equations, often used in optimization and training tasks. * - `hipSPARSE `__ - :version-ref:`hipSPARSE rocm_version` - Optimizes sparse matrix operations for efficient computations on sparse data. - Accelerates sparse matrix operations in models with sparse weight matrices or activations, commonly used in neural networks. * - `MIOpen `__ - :version-ref:`MIOpen rocm_version` - Provides optimized deep learning primitives such as convolutions, pooling, normalization, and activation functions. - Speeds up convolutional neural networks (CNNs) and other layers. Used in TensorFlow for layers like ``tf.nn.conv2d``, ``tf.nn.relu``, and ``tf.nn.lstm_cell``. * - `RCCL `__ - :version-ref:`RCCL rocm_version` - Optimizes for multi-GPU communication for operations like AllReduce and Broadcast. - Distributed data parallel training (``tf.distribute.MirroredStrategy``). Handles communication in multi-GPU setups. * - `rocThrust `__ - :version-ref:`rocThrust rocm_version` - Provides a C++ template library for parallel algorithms like sorting, reduction, and scanning. - Reduction operations like ``tf.reduce_sum``, ``tf.cumsum`` for computing the cumulative sum of elements along a given axis or ``tf.unique`` to finds unique elements in a tensor can use rocThrust. Supported and unsupported features =============================================================================== The following section maps supported data types and GPU-accelerated TensorFlow features to their minimum supported ROCm and TensorFlow versions. Data types --------------- The data type of a tensor is specified using the ``dtype`` attribute or argument, and TensorFlow supports a wide range of data types for different use cases. The basic, single data types of `tf.dtypes `__ are as follows: .. list-table:: :header-rows: 1 * - Data type - Description - Since TensorFlow - Since ROCm * - ``bfloat16`` - 16-bit bfloat (brain floating point). - 1.0.0 - 1.7 * - ``bool`` - Boolean. - 1.0.0 - 1.7 * - ``complex128`` - 128-bit complex. - 1.0.0 - 1.7 * - ``complex64`` - 64-bit complex. - 1.0.0 - 1.7 * - ``double`` - 64-bit (double precision) floating-point. - 1.0.0 - 1.7 * - ``float16`` - 16-bit (half precision) floating-point. - 1.0.0 - 1.7 * - ``float32`` - 32-bit (single precision) floating-point. - 1.0.0 - 1.7 * - ``float64`` - 64-bit (double precision) floating-point. - 1.0.0 - 1.7 * - ``half`` - 16-bit (half precision) floating-point. - 2.0.0 - 2.0 * - ``int16`` - Signed 16-bit integer. - 1.0.0 - 1.7 * - ``int32`` - Signed 32-bit integer. - 1.0.0 - 1.7 * - ``int64`` - Signed 64-bit integer. - 1.0.0 - 1.7 * - ``int8`` - Signed 8-bit integer. - 1.0.0 - 1.7 * - ``qint16`` - Signed quantized 16-bit integer. 
- 1.0.0 - 1.7 * - ``qint32`` - Signed quantized 32-bit integer. - 1.0.0 - 1.7 * - ``qint8`` - Signed quantized 8-bit integer. - 1.0.0 - 1.7 * - ``quint16`` - Unsigned quantized 16-bit integer. - 1.0.0 - 1.7 * - ``quint8`` - Unsigned quantized 8-bit integer. - 1.0.0 - 1.7 * - ``resource`` - Handle to a mutable, dynamically allocated resource. - 1.0.0 - 1.7 * - ``string`` - Variable-length string, represented as byte array. - 1.0.0 - 1.7 * - ``uint16`` - Unsigned 16-bit (word) integer. - 1.0.0 - 1.7 * - ``uint32`` - Unsigned 32-bit (dword) integer. - 1.5.0 - 1.7 * - ``uint64`` - Unsigned 64-bit (qword) integer. - 1.5.0 - 1.7 * - ``uint8`` - Unsigned 8-bit (byte) integer. - 1.0.0 - 1.7 * - ``variant`` - Data of arbitrary type (known at runtime). - 1.4.0 - 1.7 Features --------------- This table provides an overview of key features in TensorFlow and their availability in ROCm. .. list-table:: :header-rows: 1 * - Module - Description - Since TensorFlow - Since ROCm * - ``tf.linalg`` (Linear Algebra) - Operations for matrix and tensor computations, such as ``tf.linalg.matmul`` (matrix multiplication), ``tf.linalg.inv`` (matrix inversion) and ``tf.linalg.cholesky`` (Cholesky decomposition). These leverage GPUs for high-performance linear algebra operations. - 1.4 - 1.8.2 * - ``tf.nn`` (Neural Network Operations) - GPU-accelerated building blocks for deep learning models, such as 2D convolutions with ``tf.nn.conv2d``, max pooling operations with ``tf.nn.max_pool``, activation functions like ``tf.nn.relu`` or softmax for output layers with ``tf.nn.softmax``. - 1.0 - 1.8.2 * - ``tf.image`` (Image Processing) - GPU-accelerated functions for image preprocessing and augmentations, such as resize images with ``tf.image.resize``, flip images horizontally with ``tf.image.flip_left_right`` and adjust image brightness randomly with ``tf.image.random_brightness``. - 1.1 - 1.8.2 * - ``tf.keras`` (High-Level API) - GPU acceleration for Keras layers and models, including dense layers (``tf.keras.layers.Dense``), convolutional layers (``tf.keras.layers.Conv2D``) and recurrent layers (``tf.keras.layers.LSTM``). - 1.4 - 1.8.2 * - ``tf.math`` (Mathematical Operations) - GPU-accelerated mathematical operations, such as sum across dimensions with ``tf.math.reduce_sum``, elementwise exponentiation with ``tf.math.exp`` and sigmoid activation (``tf.math.sigmoid``). - 1.5 - 1.8.2 * - ``tf.signal`` (Signal Processing) - Functions for spectral analysis and signal transformations. - 1.13 - 2.1 * - ``tf.data`` (Data Input Pipeline) - GPU-accelerated data preprocessing for efficient input pipelines, Prefetching with ``tf.data.experimental.AUTOTUNE``. GPU-enabled transformations like map and batch. - 1.4 - 1.8.2 * - ``tf.distribute`` (Distributed Training) - Enabling to scale computations across multiple devices on a single machine or across multiple machines. - 1.13 - 2.1 * - ``tf.random`` (Random Number Generation) - GPU-accelerated random number generation - 1.12 - 1.9.2 * - ``tf.TensorArray`` (Dynamic Array Operations) - Enables dynamic tensor manipulation on GPUs. - 1.0 - 1.8.2 * - ``tf.sparse`` (Sparse Tensor Operations) - GPU-accelerated sparse matrix manipulations. - 1.9 - 1.9.0 * - ``tf.experimental.numpy`` - GPU-accelerated NumPy-like API for numerical computations. - 2.4 - 4.1.1 * - ``tf.RaggedTensor`` - Handling of variable-length sequences and ragged tensors with GPU support. - 1.13 - 2.1 * - ``tf.function`` with XLA (Accelerated Linear Algebra) - Enable GPU-accelerated functions in optimization. 
- 1.14 - 2.4 * - ``tf.quantization`` - Quantized operations for inference, accelerated on GPUs. - 1.12 - 1.9.2 Distributed library features ----------------------------------- Enables developers to scale computations across multiple devices on a single machine or across multiple machines. .. list-table:: :header-rows: 1 * - Feature - Description - Since TensorFlow - Since ROCm * - ``MultiWorkerMirroredStrategy`` - Synchronous training across multiple workers using mirrored variables. - 2.0 - 3.0 * - ``MirroredStrategy`` - Synchronous training across multiple GPUs on one machine. - 1.5 - 2.5 * - ``TPUStrategy`` - Efficiently trains models on Google TPUs. - 1.9 - ❌ * - ``ParameterServerStrategy`` - Asynchronous training using parameter servers for variable management. - 2.1 - 4.0 * - ``CentralStorageStrategy`` - Keeps variables on a single device and performs computation on multiple devices. - 2.3 - 4.1 * - ``CollectiveAllReduceStrategy`` - Synchronous training across multiple devices and hosts. - 1.14 - 3.5 * - Distribution Strategies API - High-level API to simplify distributed training configuration and execution. - 1.10 - 3.0 Unsupported TensorFlow features =============================================================================== The following are GPU-accelerated TensorFlow features not currently supported by ROCm. .. list-table:: :header-rows: 1 * - Feature - Description - Since TensorFlow * - Mixed Precision with TF32 - Mixed precision with TF32 is used for matrix multiplications, convolutions, and other linear algebra operations, particularly in deep learning workloads like CNNs and transformers. - 2.4 * - ``tf.distribute.TPUStrategy`` - Efficiently trains models on Google TPUs. - 1.9 Use cases and recommendations =============================================================================== * The `Training a Neural Collaborative Filtering (NCF) Recommender on an AMD GPU `__ blog post discusses training an NCF recommender system using TensorFlow. It explains how NCF improves traditional collaborative filtering methods by leveraging neural networks to model non-linear user-item interactions. The post outlines the implementation using the recommenders library, focusing on the use of implicit data (for example, user interactions like viewing or purchasing) and how it addresses challenges like the lack of negative values. * The `Creating a PyTorch/TensorFlow code environment on AMD GPUs `__ blog post provides instructions for creating a machine learning environment for PyTorch and TensorFlow on AMD GPUs using ROCm. It covers steps like installing the libraries, cloning code repositories, installing dependencies, and troubleshooting potential issues with CUDA-based code. Additionally, it explains how to HIPify code (port CUDA code to HIP) and manage Docker images for a better experience on AMD GPUs. This guide aims to help data scientists and ML practitioners adapt their code for AMD GPUs. For more use cases and recommendations, see the `ROCm Tensorflow blog posts `__. --- :orphan: .. meta:: :description: verl compatibility :keywords: GPU, verl, deep learning, framework compatibility .. version-set:: rocm_version latest ******************************************************************************* verl compatibility ******************************************************************************* Volcano Engine Reinforcement Learning for LLMs (`verl `__) is a reinforcement learning framework designed for large language models (LLMs). 
verl offers a scalable, open-source fine-tuning solution by using a hybrid programming model that makes it easy to define and run complex post-training dataflows efficiently. Its modular APIs separate computation from data, allowing smooth integration with other frameworks. It also supports flexible model placement across GPUs for efficient scaling on different cluster sizes. verl achieves high training and generation throughput by building on existing LLM frameworks. Its 3D-HybridEngine reduces memory use and communication overhead when switching between training and inference, improving overall performance. Support overview ================================================================================ - The ROCm-supported version of verl is maintained in the official `https://github.com/ROCm/verl `__ repository, which differs from the `https://github.com/volcengine/verl `__ upstream repository. - To get started and install verl on ROCm, use the prebuilt :ref:`Docker image `, which includes ROCm, verl, and all required dependencies. - See the :doc:`ROCm verl installation guide ` for installation and setup instructions. - You can also consult the upstream `verl documentation `__ for additional context. Version support -------------------------------------------------------------------------------- verl is supported on `ROCm 7.0.0 `__ and `ROCm 6.2.0 `__. Supported devices -------------------------------------------------------------------------------- **Officially Supported**: AMD Instinct™ MI300X .. _verl-recommendations: Use cases and recommendations ================================================================================ * The benefits of verl in large-scale reinforcement learning from human feedback (RLHF) are discussed in the `Reinforcement Learning from Human Feedback on AMD GPUs with verl and ROCm Integration `__ blog. The blog post outlines how the Volcano Engine Reinforcement Learning (verl) framework integrates with the AMD ROCm platform to optimize training on AMD Instinct™ GPUs. The guide details the process of building a Docker image, setting up single-node and multi-node training environments, and highlights performance benchmarks demonstrating improved throughput and convergence accuracy. This resource serves as a comprehensive starting point for deploying verl on AMD GPUs, facilitating efficient RLHF training workflows. .. _verl-supported_features: Supported features =============================================================================== The following table shows verl on ROCm support for GPU-accelerated modules. .. list-table:: :header-rows: 1 * - Module - Description - verl version - ROCm version * - ``FSDP`` - Training engine - * 0.6.0 * 0.3.0.post0 - * 7.0.0 * 6.2.0 * - ``vllm`` - Inference engine - * 0.6.0 * 0.3.0.post0 - * 7.0.0 * 6.2.0 .. _verl-docker-compat: Docker image compatibility ================================================================================ .. |docker-icon| raw:: html AMD validates and publishes `verl Docker images `_ with ROCm backends on Docker Hub. The following Docker image tag and associated inventories represent the latest verl version from the official Docker Hub. Click |docker-icon| to view the image on Docker Hub. .. list-table:: :header-rows: 1 :class: docker-image-compatibility * - Docker image - ROCm - verl - Ubuntu - PyTorch - Python - vllm * - .. raw:: html rocm/verl - `7.0.0 `__ - `0.6.0 `__ - 22.04 - `2.9.0 `__ - `3.12.11 `__ - `0.11.0 `__ * - .. 
raw:: html rocm/verl - `6.2.0 `__ - `0.3.0.post0 `__ - 20.04 - `2.5.0 `__ - `3.9.19 `__ - `0.6.3 `__ Previous versions =============================================================================== See :doc:`rocm-install-on-linux:install/3rd-party/previous-versions/verl-history` to find documentation for previous releases of the ``ROCm/verl`` Docker image. --- .. meta:: :description: Using CMake :keywords: CMake, dependencies, HIP, C++, AMD, ROCm ********************************* Using CMake ********************************* Most components in ROCm support CMake. Projects depending on header-only or library components typically require CMake 3.5 or higher whereas those wanting to make use of the CMake HIP language support will require CMake 3.21 or higher. Finding dependencies ==================== .. note:: For a complete reference on how to deal with dependencies in CMake, refer to the CMake docs on `find_package `_ and the `Using Dependencies Guide `_ to get an overview of CMake related facilities. In short, CMake supports finding dependencies in two ways: * In Module mode, it consults a file ``Find.cmake`` which tries to find the component in typical install locations and layouts. CMake ships a few dozen such scripts, but users and projects may ship them as well. * In Config mode, it locates a file named ``-config.cmake`` or ``Config.cmake`` which describes the installed component in all regards needed to consume it. ROCm predominantly relies on Config mode, one notable exception being the Module driving the compilation of HIP programs on NVIDIA runtimes. As such, when dependencies are not found in standard system locations, one either has to instruct CMake to search for package config files in additional folders using the ``CMAKE_PREFIX_PATH`` variable (a semi-colon separated list of file system paths), or using ``_ROOT`` variable on a project-specific basis. There are nearly a dozen ways to set these variables. One may be more convenient over the other depending on your workflow. Conceptually the simplest is adding it to your CMake configuration command on the command line via ``-D CMAKE_PREFIX_PATH=....`` . AMD packaged ROCm installs can typically be added to the config file search paths such as: * Windows: ``-D CMAKE_PREFIX_PATH=${env:HIP_PATH}`` * Linux: ``-D CMAKE_PREFIX_PATH=/opt/rocm`` ROCm provides the respective *config-file* packages, and this enables ``find_package`` to be used directly. ROCm does not require any Find module as the *config-file* packages are shipped with the upstream projects, such as rocPRIM and other ROCm libraries. For a complete guide on where and how ROCm may be installed on a system, refer to the installation guides for `Linux `_ and `Windows `_. Using HIP in CMake ================== ROCm components providing a C/C++ interface support consumption via any C/C++ toolchain that CMake knows how to drive. ROCm also supports the CMake HIP language features, allowing users to program using the HIP single-source programming model. When a program (or translation-unit) uses the HIP API without compiling any GPU device code, HIP can be treated in CMake as a simple C/C++ library. Using the HIP single-source programming model --------------------------------------------- Source code written in the HIP dialect of C++ typically uses the `.hip` extension. When the HIP CMake language is enabled, it will automatically associate such source files with the HIP toolchain being used. .. 
code-block:: cmake cmake_minimum_required(VERSION 3.21) # HIP language support requires 3.21 cmake_policy(VERSION 3.21.3...3.27) project(MyProj LANGUAGES HIP) add_executable(MyApp Main.hip) If you have existing CUDA code that falls within the source-compatible subset of HIP, you can tell CMake that, despite their `.cu` extension, these files are HIP sources. Do note that this mostly facilitates compiling kernel code-only source files, as host-side CUDA API won't compile in this fashion. .. code-block:: cmake add_library(MyLib MyLib.cu) set_source_files_properties(MyLib.cu PROPERTIES LANGUAGE HIP) CMake itself only hosts part of the HIP language support, such as defining HIP-specific properties, while the other half ships with the HIP implementation, such as ROCm. CMake will search for a file `hip-lang-config.cmake` describing how the properties defined by CMake translate to toolchain invocations. If you install ROCm using non-standard methods or layouts and CMake can't locate this file or detect parts of the SDK, there's a catch-all, last-resort variable consulted to locate this file, ``-D CMAKE_HIP_COMPILER_ROCM_ROOT:PATH=``, which should be set to the root of the ROCm installation. .. note:: Imported targets defined by `hip-lang-config.cmake` are for internal use only. If the user doesn't provide a semicolon-delimited list of device architectures via ``CMAKE_HIP_ARCHITECTURES``, CMake will select some sensible default. However, if you know which devices you wish to target, it is advised to set this variable explicitly. Consuming ROCm C/C++ libraries ------------------------------ Libraries such as rocBLAS, rocFFT, MIOpen, etc. behave as C/C++ libraries. Illustrated in the example below is a C++ application using MIOpen from CMake. It calls ``find_package(miopen)``, which provides the ``MIOpen`` imported target. This can be linked with ``target_link_libraries``. .. code-block:: cmake cmake_minimum_required(VERSION 3.5) # find_package(miopen) requires 3.5 cmake_policy(VERSION 3.5...3.27) project(MyProj LANGUAGES CXX) find_package(miopen) add_library(MyLib ...) target_link_libraries(MyLib PUBLIC MIOpen) .. note:: Most libraries are designed as host-only APIs, so using a GPU device compiler is not necessary for downstream projects unless they use GPU device code. Consuming the HIP API in C++ code --------------------------------- Consuming the HIP API without compiling single-source GPU device code can be done using any C++ compiler. The ``find_package(hip)`` call provides the ``hip::host`` imported target to use HIP in this scenario. .. code-block:: cmake cmake_minimum_required(VERSION 3.5) # find_package(hip) requires 3.5 cmake_policy(VERSION 3.5...3.27) project(MyProj LANGUAGES CXX) find_package(hip REQUIRED) add_executable(MyApp ...) target_link_libraries(MyApp PRIVATE hip::host) When mixing such ``CXX`` sources with ``HIP`` sources holding device code, link only to `hip::host`. If HIP sources don't have `.hip` as their extension, use `set_source_files_properties(... PROPERTIES LANGUAGE HIP)` on them. Linking to `hip::host` will set all the necessary flags for the ``CXX`` sources, while ``HIP`` sources inherit all flags from the built-in language support. Having HIP sources in a target will turn the |LINK_LANG|_ into ``HIP``. .. |LINK_LANG| replace:: ``LINKER_LANGUAGE`` .. _LINK_LANG: https://cmake.org/cmake/help/latest/prop_tgt/LINKER_LANGUAGE.html Compiling device code in C++ language mode ------------------------------------------ .. 
attention:: The workflow detailed here is considered legacy and is shown for understanding's sake. It pre-dates the existence of HIP language support in CMake. If source code has HIP device code in it, it is a HIP source file and should be compiled as such. Only resort to the method below if your HIP-enabled CMake code path can't mandate CMake version 3.21. If code uses the HIP API and compiles GPU device code, it requires using a device compiler. The compiler for CMake can be set using either the ``CMAKE_C_COMPILER`` and ``CMAKE_CXX_COMPILER`` variables or the ``CC`` and ``CXX`` environment variables. This can be set when configuring CMake or put into a CMake toolchain file. The device compiler must be set to a compiler that supports AMD GPU targets, which is usually Clang. The ``find_package(hip)`` call provides the ``hip::device`` imported target, which adds all the flags necessary for device compilation. .. code-block:: cmake cmake_minimum_required(VERSION 3.8) # cxx_std_11 requires 3.8 cmake_policy(VERSION 3.8...3.27) project(MyProj LANGUAGES CXX) find_package(hip REQUIRED) add_library(MyLib ...) target_link_libraries(MyLib PRIVATE hip::device) target_compile_features(MyLib PRIVATE cxx_std_11) .. note:: Compiling for the GPU device requires at least C++11. This project can then be configured with the following CMake commands: * Windows: ``cmake -D CMAKE_CXX_COMPILER:PATH=${env:HIP_PATH}\bin\clang++.exe`` * Linux: ``cmake -D CMAKE_CXX_COMPILER:PATH=/opt/rocm/bin/amdclang++`` These commands use the device compiler provided by the binary packages of the `ROCm HIP SDK `_ and `repo.radeon.com `_, respectively. When using the ``CXX`` language support to compile HIP device code, selecting the target GPU architectures is done by setting the ``GPU_TARGETS`` variable. ``CMAKE_HIP_ARCHITECTURES`` only exists when the HIP language is enabled. By default, this is set to a subset of the architectures currently supported by AMD ROCm. It can be set with the CMake option ``-D GPU_TARGETS="gfx1032;gfx1035"``, for example. 
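Putting these pieces together, the following is a minimal sketch of a ``CMakeLists.txt`` that compiles HIP device code in ``CXX`` language mode. The ``MyProj`` project, ``MyLib`` target, and ``MyLib.cpp`` source are placeholders, and the ``GPU_TARGETS`` default shown is only an example; substitute the architectures you actually build for.

.. code-block:: cmake

   cmake_minimum_required(VERSION 3.8)
   cmake_policy(VERSION 3.8...3.27)
   project(MyProj LANGUAGES CXX)

   # Example default; override at configure time with -D GPU_TARGETS="..."
   set(GPU_TARGETS "gfx90a;gfx942" CACHE STRING "AMD GPU architectures to compile for")

   # hip::device adds the device-compilation flags for the selected architectures.
   find_package(hip REQUIRED)

   add_library(MyLib MyLib.cpp)                       # placeholder source containing HIP kernels
   target_link_libraries(MyLib PRIVATE hip::device)   # pulls in device compilation flags
   target_compile_features(MyLib PRIVATE cxx_std_11)  # device code requires at least C++11

Configuring with, for example, ``cmake -D CMAKE_CXX_COMPILER:PATH=/opt/rocm/bin/amdclang++ -D GPU_TARGETS="gfx90a" ..`` on Linux selects both the device compiler and the target architecture on the command line. As noted above, ``GPU_TARGETS`` (not ``CMAKE_HIP_ARCHITECTURES``) is the variable consulted in ``CXX`` language mode.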
ROCm CMake packages ------------------- +-----------+----------+--------------------------------------------------------+ | Component | Package | Targets | +===========+==========+========================================================+ | HIP | hip | ``hip::host``, ``hip::device`` | +-----------+----------+--------------------------------------------------------+ | rocPRIM | rocprim | ``roc::rocprim`` | +-----------+----------+--------------------------------------------------------+ | rocThrust | rocthrust| ``roc::rocthrust`` | +-----------+----------+--------------------------------------------------------+ | hipCUB | hipcub | ``hip::hipcub`` | +-----------+----------+--------------------------------------------------------+ | rocRAND | rocrand | ``roc::rocrand`` | +-----------+----------+--------------------------------------------------------+ | rocBLAS | rocblas | ``roc::rocblas`` | +-----------+----------+--------------------------------------------------------+ | rocSOLVER | rocsolver| ``roc::rocsolver`` | +-----------+----------+--------------------------------------------------------+ | hipBLAS | hipblas | ``roc::hipblas`` | +-----------+----------+--------------------------------------------------------+ | rocFFT | rocfft | ``roc::rocfft`` | +-----------+----------+--------------------------------------------------------+ | hipFFT | hipfft | ``hip::hipfft`` | +-----------+----------+--------------------------------------------------------+ | rocSPARSE | rocsparse| ``roc::rocsparse`` | +-----------+----------+--------------------------------------------------------+ | hipSPARSE | hipsparse| ``roc::hipsparse`` | +-----------+----------+--------------------------------------------------------+ | rocALUTION|rocalution| ``roc::rocalution`` | +-----------+----------+--------------------------------------------------------+ | RCCL | rccl | ``rccl`` | +-----------+----------+--------------------------------------------------------+ | MIOpen | miopen | ``MIOpen`` | +-----------+----------+--------------------------------------------------------+ | MIGraphX | migraphx | ``migraphx::migraphx``, ``migraphx::migraphx_c``, | | | | ``migraphx::migraphx_cpu``, ``migraphx::migraphx_gpu``,| | | | ``migraphx::migraphx_onnx``, ``migraphx::migraphx_tf`` | +-----------+----------+--------------------------------------------------------+ Using CMake presets =================== CMake command lines depending on how specific users like to be when compiling code can grow to unwieldy lengths. This is the primary reason why projects tend to bake script snippets into their build definitions controlling compiler warning levels, changing CMake defaults (``CMAKE_BUILD_TYPE`` or ``BUILD_SHARED_LIBS`` just to name a few) and all sorts anti-patterns, all in the name of convenience. Load on the command-line interface (CLI) starts immediately by selecting a toolchain, the set of utilities used to compile programs. To ease some of the toolchain related pains, CMake does consult the ``CC`` and ``CXX`` environmental variables when setting a default ``CMAKE_C[XX]_COMPILER`` respectively, but that is just the tip of the iceberg. There's a fair number of variables related to just the toolchain itself (typically supplied using `toolchain files `_ ), and then we still haven't talked about user preference or project-specific options. IDEs supporting CMake (Visual Studio, Visual Studio Code, CLion, etc.) 
all came up with their own way to register command-line fragments of different purpose in a setup-and-forget fashion for quick assembly using graphical front-ends. This is all nice, but configurations aren't portable, nor can they be reused in Continuous Integration (CI) pipelines. CMake has condensed existing practice into a portable JSON format that works in all IDEs and can be invoked from any command line. This is `CMake Presets `_. There are two types of preset files: one supplied by the project, called ``CMakePresets.json`` which is meant to be committed to version control, typically used to drive CI; and one meant for the user to provide, called ``CMakeUserPresets.json``, typically used to house user preference and adapting the build to the user's environment. These JSON files are allowed to include other JSON files and the user presets always implicitly includes the non-user variant. Using HIP with presets ---------------------- Following is an example ``CMakeUserPresets.json`` file which actually compiles the `amd/rocm-examples `_ suite of sample applications on a typical ROCm installation: .. code-block:: json { "version": 3, "cmakeMinimumRequired": { "major": 3, "minor": 21, "patch": 0 }, "configurePresets": [ { "name": "layout", "hidden": true, "binaryDir": "${sourceDir}/build/${presetName}", "installDir": "${sourceDir}/install/${presetName}" }, { "name": "generator-ninja-multi-config", "hidden": true, "generator": "Ninja Multi-Config" }, { "name": "toolchain-makefiles-c/c++-amdclang", "hidden": true, "cacheVariables": { "CMAKE_C_COMPILER": "/opt/rocm/bin/amdclang", "CMAKE_CXX_COMPILER": "/opt/rocm/bin/amdclang++", "CMAKE_HIP_COMPILER": "/opt/rocm/bin/amdclang++" } }, { "name": "clang-strict-iso-high-warn", "hidden": true, "cacheVariables": { "CMAKE_C_FLAGS": "-Wall -Wextra -pedantic", "CMAKE_CXX_FLAGS": "-Wall -Wextra -pedantic", "CMAKE_HIP_FLAGS": "-Wall -Wextra -pedantic" } }, { "name": "ninja-mc-rocm", "displayName": "Ninja Multi-Config ROCm", "inherits": [ "layout", "generator-ninja-multi-config", "toolchain-makefiles-c/c++-amdclang", "clang-strict-iso-high-warn" ] } ], "buildPresets": [ { "name": "ninja-mc-rocm-debug", "displayName": "Debug", "configuration": "Debug", "configurePreset": "ninja-mc-rocm" }, { "name": "ninja-mc-rocm-release", "displayName": "Release", "configuration": "Release", "configurePreset": "ninja-mc-rocm" }, { "name": "ninja-mc-rocm-debug-verbose", "displayName": "Debug (verbose)", "configuration": "Debug", "configurePreset": "ninja-mc-rocm", "verbose": true }, { "name": "ninja-mc-rocm-release-verbose", "displayName": "Release (verbose)", "configuration": "Release", "configurePreset": "ninja-mc-rocm", "verbose": true } ], "testPresets": [ { "name": "ninja-mc-rocm-debug", "displayName": "Debug", "configuration": "Debug", "configurePreset": "ninja-mc-rocm", "execution": { "jobs": 0 } }, { "name": "ninja-mc-rocm-release", "displayName": "Release", "configuration": "Release", "configurePreset": "ninja-mc-rocm", "execution": { "jobs": 0 } } ] } .. note:: Getting presets to work reliably on Windows requires some CMake improvements and/or support from compiler vendors. (Refer to `Add support to the Visual Studio generators `_ and `Sourcing environment scripts `_ .) --- .. 
meta:: :description: MI300 and MI200 Series performance counters and metrics :keywords: MI300, MI200, performance counters, command processor counters *************************************************************************************************** MI300 and MI200 Series performance counters and metrics *************************************************************************************************** This document lists and describes the hardware performance counters and derived metrics available for the AMD Instinct™ MI300 and MI200 GPU. You can also access this information using the :doc:`ROCprofiler-SDK `. MI300 and MI200 Series performance counters =============================================================== Series performance counters include the following categories: * :ref:`command-processor-counters` * :ref:`graphics-register-bus-manager-counters` * :ref:`spi-counters` * :ref:`compute-unit-counters` * :ref:`l1i-and-sl1d-cache-counters` * :ref:`vector-l1-cache-subsystem-counters` * :ref:`l2-cache-access-counters` The following sections provide additional details for each category. .. note:: Preliminary validation of all MI300 and MI200 Series performance counters is in progress. Those with an asterisk (*) require further evaluation. .. _command-processor-counters: Command processor counters --------------------------------------------------------------------------------------------------------------- Command processor counters are further classified into command processor-fetcher and command processor-compute. Command processor-fetcher counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Hardware counter", "Unit", "Definition" "``CPF_CMP_UTCL1_STALL_ON_TRANSLATION``", "Cycles", "Number of cycles one of the compute unified translation caches (L1) is stalled waiting on translation" "``CPF_CPF_STAT_BUSY``", "Cycles", "Number of cycles command processor-fetcher is busy" "``CPF_CPF_STAT_IDLE``", "Cycles", "Number of cycles command processor-fetcher is idle" "``CPF_CPF_STAT_STALL``", "Cycles", "Number of cycles command processor-fetcher is stalled" "``CPF_CPF_TCIU_BUSY``", "Cycles", "Number of cycles command processor-fetcher texture cache interface unit interface is busy" "``CPF_CPF_TCIU_IDLE``", "Cycles", "Number of cycles command processor-fetcher texture cache interface unit interface is idle" "``CPF_CPF_TCIU_STALL``", "Cycles", "Number of cycles command processor-fetcher texture cache interface unit interface is stalled waiting on free tags" The texture cache interface unit is the interface between the command processor and the memory system. Command processor-compute counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
csv-table:: :header: "Hardware counter", "Unit", "Definition" "``CPC_ME1_BUSY_FOR_PACKET_DECODE``", "Cycles", "Number of cycles command processor-compute micro engine is busy decoding packets" "``CPC_UTCL1_STALL_ON_TRANSLATION``", "Cycles", "Number of cycles one of the unified translation caches (L1) is stalled waiting on translation" "``CPC_CPC_STAT_BUSY``", "Cycles", "Number of cycles command processor-compute is busy" "``CPC_CPC_STAT_IDLE``", "Cycles", "Number of cycles command processor-compute is idle" "``CPC_CPC_STAT_STALL``", "Cycles", "Number of cycles command processor-compute is stalled" "``CPC_CPC_TCIU_BUSY``", "Cycles", "Number of cycles command processor-compute texture cache interface unit interface is busy" "``CPC_CPC_TCIU_IDLE``", "Cycles", "Number of cycles command processor-compute texture cache interface unit interface is idle" "``CPC_CPC_UTCL2IU_BUSY``", "Cycles", "Number of cycles command processor-compute unified translation cache (L2) interface is busy" "``CPC_CPC_UTCL2IU_IDLE``", "Cycles", "Number of cycles command processor-compute unified translation cache (L2) interface is idle" "``CPC_CPC_UTCL2IU_STALL``", "Cycles", "Number of cycles command processor-compute unified translation cache (L2) interface is stalled" "``CPC_ME1_DC0_SPI_BUSY``", "Cycles", "Number of cycles command processor-compute micro engine processor is busy" The micro engine runs packet-processing firmware on the command processor-compute counter. .. _graphics-register-bus-manager-counters: Graphics register bus manager counters --------------------------------------------------------------------------------------------------------------- .. csv-table:: :header: "Hardware counter", "Unit", "Definition" "``GRBM_COUNT``", "Cycles","Number of free-running GPU cycles" "``GRBM_GUI_ACTIVE``", "Cycles", "Number of GPU active cycles" "``GRBM_CP_BUSY``", "Cycles", "Number of cycles any of the command processor blocks are busy" "``GRBM_SPI_BUSY``", "Cycles", "Number of cycles any of the shader processor input is busy in the shader engines" "``GRBM_TA_BUSY``", "Cycles", "Number of cycles any of the texture addressing unit is busy in the shader engines" "``GRBM_TC_BUSY``", "Cycles", "Number of cycles any of the texture cache blocks are busy" "``GRBM_CPC_BUSY``", "Cycles", "Number of cycles the command processor-compute is busy" "``GRBM_CPF_BUSY``", "Cycles", "Number of cycles the command processor-fetcher is busy" "``GRBM_UTCL2_BUSY``", "Cycles", "Number of cycles the unified translation cache (Level 2 [L2]) block is busy" "``GRBM_EA_BUSY``", "Cycles", "Number of cycles the efficiency arbiter block is busy" Texture cache blocks include: * Texture cache arbiter * Texture cache per pipe, also known as vector Level 1 (L1) cache * Texture cache per channel, also known as known as L2 cache * Texture cache interface .. _spi-counters: Shader processor input counters --------------------------------------------------------------------------------------------------------------- .. 
csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SPI_CSN_BUSY``", "Cycles", "Number of cycles with outstanding waves" "``SPI_CSN_WINDOW_VALID``", "Cycles", "Number of cycles enabled by ``perfcounter_start`` event" "``SPI_CSN_NUM_THREADGROUPS``", "Workgroups", "Number of dispatched workgroups" "``SPI_CSN_WAVE``", "Wavefronts", "Number of dispatched wavefronts" "``SPI_RA_REQ_NO_ALLOC``", "Cycles", "Number of arbiter cycles with requests but no allocation" "``SPI_RA_REQ_NO_ALLOC_CSN``", "Cycles", "Number of arbiter cycles with compute shader (n\ :sup:`th` pipe) requests but no compute shader (n\ :sup:`th` pipe) allocation" "``SPI_RA_RES_STALL_CSN``", "Cycles", "Number of arbiter stall cycles due to shortage of compute shader (n\ :sup:`th` pipe) pipeline slots" "``SPI_RA_TMP_STALL_CSN``", "Cycles", "Number of stall cycles due to shortage of temp space" "``SPI_RA_WAVE_SIMD_FULL_CSN``", "SIMD-cycles", "Accumulated number of single instruction, multiple data (SIMD) per cycle affected by shortage of wave slots for compute shader (n\ :sup:`th` pipe) wave dispatch" "``SPI_RA_VGPR_SIMD_FULL_CSN``", "SIMD-cycles", "Accumulated number of SIMDs per cycle affected by shortage of vector general-purpose register (VGPR) slots for compute shader (n\ :sup:`th` pipe) wave dispatch" "``SPI_RA_SGPR_SIMD_FULL_CSN``", "SIMD-cycles", "Accumulated number of SIMDs per cycle affected by shortage of scalar general-purpose register (SGPR) slots for compute shader (n\ :sup:`th` pipe) wave dispatch" "``SPI_RA_LDS_CU_FULL_CSN``", "CU", "Number of compute units affected by shortage of local data share (LDS) space for compute shader (n\ :sup:`th` pipe) wave dispatch" "``SPI_RA_BAR_CU_FULL_CSN``", "CU", "Number of compute units with compute shader (n\ :sup:`th` pipe) waves waiting at a BARRIER" "``SPI_RA_BULKY_CU_FULL_CSN``", "CU", "Number of compute units with compute shader (n\ :sup:`th` pipe) waves waiting for BULKY resource" "``SPI_RA_TGLIM_CU_FULL_CSN``", "Cycles", "Number of compute shader (n\ :sup:`th` pipe) wave stall cycles due to restriction of ``tg_limit`` for thread group size" "``SPI_RA_WVLIM_STALL_CSN``", "Cycles", "Number of cycles compute shader (n\ :sup:`th` pipe) is stalled due to ``WAVE_LIMIT``" "``SPI_VWC_CSC_WR``", "Qcycles", "Number of quad-cycles taken to initialize VGPRs when launching waves" "``SPI_SWC_CSC_WR``", "Qcycles", "Number of quad-cycles taken to initialize SGPRs when launching waves" .. _compute-unit-counters: Compute unit counters --------------------------------------------------------------------------------------------------------------- The compute unit counters are further classified into instruction mix, matrix fused multiply-add (FMA) operation counters, level counters, wavefront counters, wavefront cycle counters, and LDS counters. Instruction mix ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SQ_INSTS``", "Instr", "Number of instructions issued" "``SQ_INSTS_VALU``", "Instr", "Number of vector arithmetic logic unit (VALU) instructions including matrix FMA issued" "``SQ_INSTS_VALU_ADD_F16``", "Instr", "Number of VALU half-precision floating-point (F16) ``ADD`` or ``SUB`` instructions issued" "``SQ_INSTS_VALU_MUL_F16``", "Instr", "Number of VALU F16 Multiply instructions issued" "``SQ_INSTS_VALU_FMA_F16``", "Instr", "Number of VALU F16 FMA or multiply-add instructions issued" "``SQ_INSTS_VALU_TRANS_F16``", "Instr", "Number of VALU F16 Transcendental instructions issued" "``SQ_INSTS_VALU_ADD_F32``", "Instr", "Number of VALU full-precision floating-point (F32) ``ADD`` or ``SUB`` instructions issued" "``SQ_INSTS_VALU_MUL_F32``", "Instr", "Number of VALU F32 Multiply instructions issued" "``SQ_INSTS_VALU_FMA_F32``", "Instr", "Number of VALU F32 FMA or multiply-add instructions issued" "``SQ_INSTS_VALU_TRANS_F32``", "Instr", "Number of VALU F32 Transcendental instructions issued" "``SQ_INSTS_VALU_ADD_F64``", "Instr", "Number of VALU double-precision floating-point (F64) ``ADD`` or ``SUB`` instructions issued" "``SQ_INSTS_VALU_MUL_F64``", "Instr", "Number of VALU F64 Multiply instructions issued" "``SQ_INSTS_VALU_FMA_F64``", "Instr", "Number of VALU F64 FMA or multiply-add instructions issued" "``SQ_INSTS_VALU_TRANS_F64``", "Instr", "Number of VALU F64 Transcendental instructions issued" "``SQ_INSTS_VALU_INT32``", "Instr", "Number of VALU 32-bit integer instructions (signed or unsigned) issued" "``SQ_INSTS_VALU_INT64``", "Instr", "Number of VALU 64-bit integer instructions (signed or unsigned) issued" "``SQ_INSTS_VALU_CVT``", "Instr", "Number of VALU Conversion instructions issued" "``SQ_INSTS_VALU_MFMA_I8``", "Instr", "Number of 8-bit Integer matrix FMA instructions issued" "``SQ_INSTS_VALU_MFMA_F16``", "Instr", "Number of F16 matrix FMA instructions issued" "``SQ_INSTS_VALU_MFMA_F32``", "Instr", "Number of F32 matrix FMA instructions issued" "``SQ_INSTS_VALU_MFMA_F64``", "Instr", "Number of F64 matrix FMA instructions issued" "``SQ_INSTS_MFMA``", "Instr", "Number of matrix FMA instructions issued" "``SQ_INSTS_VMEM_WR``", "Instr", "Number of vector memory write instructions (including flat) issued" "``SQ_INSTS_VMEM_RD``", "Instr", "Number of vector memory read instructions (including flat) issued" "``SQ_INSTS_VMEM``", "Instr", "Number of vector memory instructions issued, including both flat and buffer instructions" "``SQ_INSTS_SALU``", "Instr", "Number of scalar arithmetic logic unit (SALU) instructions issued" "``SQ_INSTS_SMEM``", "Instr", "Number of scalar memory instructions issued" "``SQ_INSTS_SMEM_NORM``", "Instr", "Number of scalar memory instructions normalized to match ``smem_level`` issued" "``SQ_INSTS_FLAT``", "Instr", "Number of flat instructions issued" "``SQ_INSTS_FLAT_LDS_ONLY``", "Instr", "**MI200 Series only** Number of FLAT instructions that read/write only from/to LDS issued. Works only if ``EARLY_TA_DONE`` is enabled."
"``SQ_INSTS_LDS``", "Instr", "Number of LDS instructions issued **(MI200: includes flat; MI300: does not include flat)**" "``SQ_INSTS_GDS``", "Instr", "Number of global data share instructions issued" "``SQ_INSTS_EXP_GDS``", "Instr", "Number of EXP and global data share instructions excluding skipped export instructions issued" "``SQ_INSTS_BRANCH``", "Instr", "Number of branch instructions issued" "``SQ_INSTS_SENDMSG``", "Instr", "Number of ``SENDMSG`` instructions including ``s_endpgm`` issued" "``SQ_INSTS_VSKIPPED``", "Instr", "Number of vector instructions skipped" Flat instructions allow read, write, and atomic access to a generic memory address pointer that can resolve to any of the following physical memories: * Global Memory * Scratch ("private") * LDS ("shared") * Invalid - ``MEM_VIOL`` TrapStatus Matrix fused multiply-add operation counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SQ_INSTS_VALU_MFMA_MOPS_I8``", "IOP", "Number of 8-bit integer matrix FMA ops in the unit of 512" "``SQ_INSTS_VALU_MFMA_MOPS_F16``", "FLOP", "Number of F16 floating matrix FMA ops in the unit of 512" "``SQ_INSTS_VALU_MFMA_MOPS_BF16``", "FLOP", "Number of BF16 floating matrix FMA ops in the unit of 512" "``SQ_INSTS_VALU_MFMA_MOPS_F32``", "FLOP", "Number of F32 floating matrix FMA ops in the unit of 512" "``SQ_INSTS_VALU_MFMA_MOPS_F64``", "FLOP", "Number of F64 floating matrix FMA ops in the unit of 512" Level counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. note:: All level counters must be followed by ``SQ_ACCUM_PREV_HIRES`` counter to measure average latency. .. csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SQ_ACCUM_PREV``", "Count", "Accumulated counter sample value where accumulation takes place once every four cycles" "``SQ_ACCUM_PREV_HIRES``", "Count", "Accumulated counter sample value where accumulation takes place once every cycle" "``SQ_LEVEL_WAVES``", "Waves", "Number of inflight waves" "``SQ_INST_LEVEL_VMEM``", "Instr", "Number of inflight vector memory (including flat) instructions" "``SQ_INST_LEVEL_SMEM``", "Instr", "Number of inflight scalar memory instructions" "``SQ_INST_LEVEL_LDS``", "Instr", "Number of inflight LDS (including flat) instructions" "``SQ_IFETCH_LEVEL``", "Instr", "Number of inflight instruction fetch requests from the cache" Use the following formulae to calculate latencies: * Vector memory latency = ``SQ_ACCUM_PREV_HIRES`` divided by ``SQ_INSTS_VMEM`` * Wave latency = ``SQ_ACCUM_PREV_HIRES`` divided by ``SQ_WAVE`` * LDS latency = ``SQ_ACCUM_PREV_HIRES`` divided by ``SQ_INSTS_LDS`` * Scalar memory latency = ``SQ_ACCUM_PREV_HIRES`` divided by ``SQ_INSTS_SMEM_NORM`` * Instruction fetch latency = ``SQ_ACCUM_PREV_HIRES`` divided by ``SQ_IFETCH`` Wavefront counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SQ_WAVES``", "Waves", "Number of wavefronts dispatched to sequencers, including both new and restored wavefronts" "``SQ_WAVES_SAVED``", "Waves", "Number of context-saved waves" "``SQ_WAVES_RESTORED``", "Waves", "Number of context-restored waves sent to sequencers" "``SQ_WAVES_EQ_64``", "Waves", "Number of wavefronts with exactly 64 active threads sent to sequencers" "``SQ_WAVES_LT_64``", "Waves", "Number of wavefronts with less than 64 active threads sent to sequencers" "``SQ_WAVES_LT_48``", "Waves", "Number of wavefronts with less than 48 active threads sent to sequencers" "``SQ_WAVES_LT_32``", "Waves", "Number of wavefronts with less than 32 active threads sent to sequencers" "``SQ_WAVES_LT_16``", "Waves", "Number of wavefronts with less than 16 active threads sent to sequencers" Wavefront cycle counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SQ_CYCLES``", "Cycles", "Clock cycles" "``SQ_BUSY_CYCLES``", "Cycles", "Number of cycles the sequencer reports being busy" "``SQ_BUSY_CU_CYCLES``", "Qcycles", "Number of quad-cycles each compute unit is busy" "``SQ_VALU_MFMA_BUSY_CYCLES``", "Cycles", "Number of cycles the matrix FMA arithmetic logic unit (ALU) is busy" "``SQ_WAVE_CYCLES``", "Qcycles", "Number of quad-cycles spent by waves in the compute units" "``SQ_WAIT_ANY``", "Qcycles", "Number of quad-cycles spent waiting for anything" "``SQ_WAIT_INST_ANY``", "Qcycles", "Number of quad-cycles spent waiting for any instruction to be issued" "``SQ_ACTIVE_INST_ANY``", "Qcycles", "Number of quad-cycles spent by each wave to work on an instruction" "``SQ_ACTIVE_INST_VMEM``", "Qcycles", "Number of quad-cycles spent by the sequencer instruction arbiter to work on a vector memory instruction" "``SQ_ACTIVE_INST_LDS``", "Qcycles", "Number of quad-cycles spent by the sequencer instruction arbiter to work on an LDS instruction" "``SQ_ACTIVE_INST_VALU``", "Qcycles", "Number of quad-cycles spent by the sequencer instruction arbiter to work on a VALU instruction" "``SQ_ACTIVE_INST_SCA``", "Qcycles", "Number of quad-cycles spent by the sequencer instruction arbiter to work on a SALU or scalar memory instruction" "``SQ_ACTIVE_INST_EXP_GDS``", "Qcycles", "Number of quad-cycles spent by the sequencer instruction arbiter to work on an ``EXPORT`` or ``GDS`` instruction" "``SQ_ACTIVE_INST_MISC``", "Qcycles", "Number of quad-cycles spent by the sequencer instruction arbiter to work on a ``BRANCH`` or ``SENDMSG`` instruction" "``SQ_ACTIVE_INST_FLAT``", "Qcycles", "Number of quad-cycles spent by the sequencer instruction arbiter to work on a flat instruction" "``SQ_INST_CYCLES_VMEM_WR``", "Qcycles", "Number of quad-cycles spent to send address and command data for vector memory write instructions" "``SQ_INST_CYCLES_VMEM_RD``", "Qcycles", "Number of quad-cycles spent to send address and command data for vector memory read instructions" "``SQ_INST_CYCLES_SMEM``", "Qcycles", "Number of quad-cycles spent to execute scalar memory reads" "``SQ_INST_CYCLES_SALU``", "Qcycles", "Number of quad-cycles spent to execute non-memory read scalar operations" "``SQ_THREAD_CYCLES_VALU``", "Qcycles", "Number of quad-cycles spent to execute VALU operations on active threads" "``SQ_WAIT_INST_LDS``", "Qcycles", "Number of quad-cycles spent waiting for an LDS instruction to be issued" ``SQ_THREAD_CYCLES_VALU`` is similar to ``INST_CYCLES_VALU``, but it's multiplied by the number of active threads.
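The average-latency formulas listed under the level counters can be evaluated directly from collected counter values. The following Python sketch is only an illustration and is not part of any ROCm tool: the counter values are hypothetical placeholders, and in a real workflow they would come from a profiling run in which each level counter is collected together with ``SQ_ACCUM_PREV_HIRES``.

.. code-block:: python

   # Minimal sketch: evaluate the level-counter latency formulas from
   # counter values collected by a profiler. All numbers below are
   # hypothetical placeholders.

   def average_latency(accum_prev_hires: int, event_count: int) -> float:
       """SQ_ACCUM_PREV_HIRES divided by the matching event count, in cycles."""
       return accum_prev_hires / event_count if event_count else 0.0

   counters = {
       "SQ_ACCUM_PREV_HIRES": 1_250_000,  # collected alongside SQ_INST_LEVEL_VMEM
       "SQ_INSTS_VMEM": 4_096,
   }

   vmem_latency = average_latency(counters["SQ_ACCUM_PREV_HIRES"],
                                  counters["SQ_INSTS_VMEM"])
   print(f"Average vector memory latency: {vmem_latency:.1f} cycles")

The same helper applies to the other formulas, for example dividing by ``SQ_WAVES`` for wave latency or by ``SQ_INSTS_SMEM_NORM`` for scalar memory latency.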
LDS counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SQ_LDS_ATOMIC_RETURN``", "Cycles", "Number of atomic return cycles in LDS" "``SQ_LDS_BANK_CONFLICT``", "Cycles", "Number of cycles LDS is stalled by bank conflicts" "``SQ_LDS_ADDR_CONFLICT``", "Cycles", "Number of cycles LDS is stalled by address conflicts" "``SQ_LDS_UNALIGNED_STALL``", "Cycles", "Number of cycles LDS is stalled processing flat unaligned load or store operations" "``SQ_LDS_MEM_VIOLATIONS``", "Count", "Number of threads that have a memory violation in the LDS" "``SQ_LDS_IDX_ACTIVE``", "Cycles", "Number of cycles LDS is used for indexed operations" Miscellaneous counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SQ_IFETCH``", "Count", "Number of instruction fetch requests from L1i, in 32-byte width" "``SQ_ITEMS``", "Threads", "Number of valid items per wave" .. _l1i-and-sl1d-cache-counters: L1 instruction cache (L1i) and scalar L1 data cache (L1d) counters --------------------------------------------------------------------------------------------------------------- .. csv-table:: :header: "Hardware counter", "Unit", "Definition" "``SQC_ICACHE_REQ``", "Req", "Number of L1 instruction (L1i) cache requests" "``SQC_ICACHE_HITS``", "Count", "Number of L1i cache hits" "``SQC_ICACHE_MISSES``", "Count", "Number of non-duplicate L1i cache misses including uncached requests" "``SQC_ICACHE_MISSES_DUPLICATE``", "Count", "Number of duplicate L1i cache misses whose previous lookup miss on the same cache line is not fulfilled yet" "``SQC_DCACHE_REQ``", "Req", "Number of scalar L1d requests" "``SQC_DCACHE_INPUT_VALID_READYB``", "Cycles", "Number of cycles while sequencer input is valid but scalar L1d is not ready" "``SQC_DCACHE_HITS``", "Count", "Number of scalar L1d hits" "``SQC_DCACHE_MISSES``", "Count", "Number of non-duplicate scalar L1d misses including uncached requests" "``SQC_DCACHE_MISSES_DUPLICATE``", "Count", "Number of duplicate scalar L1d misses" "``SQC_DCACHE_REQ_READ_1``", "Req", "Number of constant cache read requests in a single 32-bit data word" "``SQC_DCACHE_REQ_READ_2``", "Req", "Number of constant cache read requests in two 32-bit data words" "``SQC_DCACHE_REQ_READ_4``", "Req", "Number of constant cache read requests in four 32-bit data words" "``SQC_DCACHE_REQ_READ_8``", "Req", "Number of constant cache read requests in eight 32-bit data words" "``SQC_DCACHE_REQ_READ_16``", "Req", "Number of constant cache read requests in 16 32-bit data words" "``SQC_DCACHE_ATOMIC``", "Req", "Number of atomic requests" "``SQC_TC_REQ``", "Req", "Number of texture cache requests that were issued by instruction and constant caches" "``SQC_TC_INST_REQ``", "Req", "Number of instruction requests to the L2 cache" "``SQC_TC_DATA_READ_REQ``", "Req", "Number of data Read requests to the L2 cache" "``SQC_TC_DATA_WRITE_REQ``", "Req", "Number of data write requests to the L2 cache" "``SQC_TC_DATA_ATOMIC_REQ``", "Req", "Number of data atomic requests to the L2 cache" "``SQC_TC_STALL``", "Cycles", "Number of cycles while the valid requests to the L2 cache are stalled" .. 
_vector-l1-cache-subsystem-counters: Vector L1 cache subsystem counters --------------------------------------------------------------------------------------------------------------- The vector L1 cache subsystem counters are further classified into texture addressing unit, texture data unit, vector L1d or texture cache per pipe, and texture cache arbiter counters. Texture addressing unit counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Hardware counter", "Unit", "Definition", "Value range for ``n``" "``TA_TA_BUSY[n]``", "Cycles", "Texture addressing unit busy cycles", "0-15" "``TA_TOTAL_WAVEFRONTS[n]``", "Instr", "Number of wavefronts processed by texture addressing unit", "0-15" "``TA_BUFFER_WAVEFRONTS[n]``", "Instr", "Number of buffer wavefronts processed by texture addressing unit", "0-15" "``TA_BUFFER_READ_WAVEFRONTS[n]``", "Instr", "Number of buffer read wavefronts processed by texture addressing unit", "0-15" "``TA_BUFFER_WRITE_WAVEFRONTS[n]``", "Instr", "Number of buffer write wavefronts processed by texture addressing unit", "0-15" "``TA_BUFFER_ATOMIC_WAVEFRONTS[n]``", "Instr", "Number of buffer atomic wavefronts processed by texture addressing unit", "0-15" "``TA_BUFFER_TOTAL_CYCLES[n]``", "Cycles", "Number of buffer cycles (including read and write) issued to texture cache", "0-15" "``TA_BUFFER_COALESCED_READ_CYCLES[n]``", "Cycles", "Number of coalesced buffer read cycles issued to texture cache", "0-15" "``TA_BUFFER_COALESCED_WRITE_CYCLES[n]``", "Cycles", "Number of coalesced buffer write cycles issued to texture cache", "0-15" "``TA_ADDR_STALLED_BY_TC_CYCLES[n]``", "Cycles", "Number of cycles texture addressing unit address path is stalled by texture cache", "0-15" "``TA_DATA_STALLED_BY_TC_CYCLES[n]``", "Cycles", "Number of cycles texture addressing unit data path is stalled by texture cache", "0-15" "``TA_ADDR_STALLED_BY_TD_CYCLES[n]``", "Cycles", "Number of cycles texture addressing unit address path is stalled by texture data unit", "0-15" "``TA_FLAT_WAVEFRONTS[n]``", "Instr", "Number of flat opcode wavefronts processed by texture addressing unit", "0-15" "``TA_FLAT_READ_WAVEFRONTS[n]``", "Instr", "Number of flat opcode read wavefronts processed by texture addressing unit", "0-15" "``TA_FLAT_WRITE_WAVEFRONTS[n]``", "Instr", "Number of flat opcode write wavefronts processed by texture addressing unit", "0-15" "``TA_FLAT_ATOMIC_WAVEFRONTS[n]``", "Instr", "Number of flat opcode atomic wavefronts processed by texture addressing unit", "0-15" Texture data unit counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
csv-table:: :header: "Hardware counter", "Unit", "Definition", "Value range for ``n``" "``TD_TD_BUSY[n]``", "Cycle", "Texture data unit busy cycles while it is processing or waiting for data", "0-15" "``TD_TC_STALL[n]``", "Cycle", "Number of cycles texture data unit is stalled waiting for texture cache data", "0-15" "``TD_SPI_STALL[n]``", "Cycle", "Number of cycles texture data unit is stalled by shader processor input", "0-15" "``TD_LOAD_WAVEFRONT[n]``", "Instr", "Number of wavefront instructions (read, write, atomic)", "0-15" "``TD_STORE_WAVEFRONT[n]``", "Instr", "Number of write wavefront instructions", "0-15" "``TD_ATOMIC_WAVEFRONT[n]``", "Instr", "Number of atomic wavefront instructions", "0-15" "``TD_COALESCABLE_WAVEFRONT[n]``", "Instr", "Number of coalescable wavefronts according to texture addressing unit", "0-15" Texture cache per pipe counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Hardware counter", "Unit", "Definition", "Value range for ``n``" "``TCP_GATE_EN1[n]``", "Cycles", "Number of cycles vector L1d interface clocks are turned on", "0-15" "``TCP_GATE_EN2[n]``", "Cycles", "Number of cycles vector L1d core clocks are turned on", "0-15" "``TCP_TD_TCP_STALL_CYCLES[n]``", "Cycles", "Number of cycles texture data unit stalls vector L1d", "0-15" "``TCP_TCR_TCP_STALL_CYCLES[n]``", "Cycles", "Number of cycles texture cache router stalls vector L1d", "0-15" "``TCP_READ_TAGCONFLICT_STALL_CYCLES[n]``", "Cycles", "Number of cycles tag RAM conflict stalls on a read", "0-15" "``TCP_WRITE_TAGCONFLICT_STALL_CYCLES[n]``", "Cycles", "Number of cycles tag RAM conflict stalls on a write", "0-15" "``TCP_ATOMIC_TAGCONFLICT_STALL_CYCLES[n]``", "Cycles", "Number of cycles tag RAM conflict stalls on an atomic", "0-15" "``TCP_PENDING_STALL_CYCLES[n]``", "Cycles", "Number of cycles vector L1d is stalled due to data pending from L2 Cache", "0-15" "``TCP_TCP_TA_DATA_STALL_CYCLES``", "Cycles", "Number of cycles texture cache per pipe stalls texture addressing unit data interface", "NA" "``TCP_TA_TCP_STATE_READ[n]``", "Req", "Number of state reads", "0-15" "``TCP_VOLATILE[n]``", "Req", "Number of L1 volatile pixels or buffers from texture addressing unit", "0-15" "``TCP_TOTAL_ACCESSES[n]``", "Req", "Number of vector L1d accesses. 
Equals ``TCP_PERF_SEL_TOTAL_READ`` + ``TCP_PERF_SEL_TOTAL_NONREAD``", "0-15" "``TCP_TOTAL_READ[n]``", "Req", "Number of vector L1d read accesses", "0-15" "``TCP_TOTAL_WRITE[n]``", "Req", "Number of vector L1d write accesses", "0-15" "``TCP_TOTAL_ATOMIC_WITH_RET[n]``", "Req", "Number of vector L1d atomic requests with return", "0-15" "``TCP_TOTAL_ATOMIC_WITHOUT_RET[n]``", "Req", "Number of vector L1d atomic requests without return", "0-15" "``TCP_TOTAL_WRITEBACK_INVALIDATES[n]``", "Count", "Total number of vector L1d writebacks and invalidates", "0-15" "``TCP_UTCL1_REQUEST[n]``", "Req", "Number of address translation requests to unified translation cache (L1)", "0-15" "``TCP_UTCL1_TRANSLATION_HIT[n]``", "Req", "Number of unified translation cache (L1) translation hits", "0-15" "``TCP_UTCL1_TRANSLATION_MISS[n]``", "Req", "Number of unified translation cache (L1) translation misses", "0-15" "``TCP_UTCL1_PERMISSION_MISS[n]``", "Req", "Number of unified translation cache (L1) permission misses", "0-15" "``TCP_TOTAL_CACHE_ACCESSES[n]``", "Req", "Number of vector L1d cache accesses including hits and misses", "0-15" "``TCP_TCP_LATENCY[n]``", "Cycles", "**MI200 Series only** Accumulated wave access latency to vL1D over all wavefronts", "0-15" "``TCP_TCC_READ_REQ_LATENCY[n]``", "Cycles", "**MI200 Series only** Total vL1D to L2 request latency over all wavefronts for reads and atomics with return", "0-15" "``TCP_TCC_WRITE_REQ_LATENCY[n]``", "Cycles", "**MI200 Series only** Total vL1D to L2 request latency over all wavefronts for writes and atomics without return", "0-15" "``TCP_TCC_READ_REQ[n]``", "Req", "Number of read requests to L2 cache", "0-15" "``TCP_TCC_WRITE_REQ[n]``", "Req", "Number of write requests to L2 cache", "0-15" "``TCP_TCC_ATOMIC_WITH_RET_REQ[n]``", "Req", "Number of atomic requests to L2 cache with return", "0-15" "``TCP_TCC_ATOMIC_WITHOUT_RET_REQ[n]``", "Req", "Number of atomic requests to L2 cache without return", "0-15" "``TCP_TCC_NC_READ_REQ[n]``", "Req", "Number of non-coherently cached read requests to L2 cache", "0-15" "``TCP_TCC_UC_READ_REQ[n]``", "Req", "Number of uncached read requests to L2 cache", "0-15" "``TCP_TCC_CC_READ_REQ[n]``", "Req", "Number of coherently cached read requests to L2 cache", "0-15" "``TCP_TCC_RW_READ_REQ[n]``", "Req", "Number of coherently cached with write read requests to L2 cache", "0-15" "``TCP_TCC_NC_WRITE_REQ[n]``", "Req", "Number of non-coherently cached write requests to L2 cache", "0-15" "``TCP_TCC_UC_WRITE_REQ[n]``", "Req", "Number of uncached write requests to L2 cache", "0-15" "``TCP_TCC_CC_WRITE_REQ[n]``", "Req", "Number of coherently cached write requests to L2 cache", "0-15" "``TCP_TCC_RW_WRITE_REQ[n]``", "Req", "Number of coherently cached with write write requests to L2 cache", "0-15" "``TCP_TCC_NC_ATOMIC_REQ[n]``", "Req", "Number of non-coherently cached atomic requests to L2 cache", "0-15" "``TCP_TCC_UC_ATOMIC_REQ[n]``", "Req", "Number of uncached atomic requests to L2 cache", "0-15" "``TCP_TCC_CC_ATOMIC_REQ[n]``", "Req", "Number of coherently cached atomic requests to L2 cache", "0-15" "``TCP_TCC_RW_ATOMIC_REQ[n]``", "Req", "Number of coherently cached with write atomic requests to L2 cache", "0-15" Note that: * ``TCP_TOTAL_READ[n]`` = ``TCP_PERF_SEL_TOTAL_HIT_LRU_READ`` + ``TCP_PERF_SEL_TOTAL_MISS_LRU_READ`` + ``TCP_PERF_SEL_TOTAL_MISS_EVICT_READ`` * ``TCP_TOTAL_WRITE[n]`` = ``TCP_PERF_SEL_TOTAL_MISS_LRU_WRITE``+ ``TCP_PERF_SEL_TOTAL_MISS_EVICT_WRITE`` * ``TCP_TOTAL_WRITEBACK_INVALIDATES[n]`` = ``TCP_PERF_SEL_TOTAL_WBINVL1``+
``TCP_PERF_SEL_TOTAL_WBINVL1_VOL``+ ``TCP_PERF_SEL_CP_TCP_INVALIDATE``+ ``TCP_PERF_SEL_SQ_TCP_INVALIDATE_VOL`` Texture cache arbiter counters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Hardware counter", "Unit", "Definition", "Value range for ``n``" "``TCA_CYCLE[n]``", "Cycles", "Number of texture cache arbiter cycles", "0-31" "``TCA_BUSY[n]``", "Cycles", "Number of cycles texture cache arbiter has a pending request", "0-31" .. _l2-cache-access-counters: L2 cache access counters --------------------------------------------------------------------------------------------------------------- L2 cache is also known as texture cache per channel. .. tab-set:: .. tab-item:: MI300 hardware counter .. csv-table:: :header: "Hardware counter", "Unit", "Definition", "Value range for ``n``" "``TCC_CYCLE[n]``", "Cycles", "Number of L2 cache free-running clocks", "0-31" "``TCC_BUSY[n]``", "Cycles", "Number of L2 cache busy cycles", "0-31" "``TCC_REQ[n]``", "Req", "Number of L2 cache requests of all types (measured at the tag block)", "0-31" "``TCC_STREAMING_REQ[n]``", "Req", "Number of L2 cache streaming requests (measured at the tag block)", "0-31" "``TCC_NC_REQ[n]``", "Req", "Number of non-coherently cached requests (measured at the tag block)", "0-31" "``TCC_UC_REQ[n]``", "Req", "Number of uncached requests. This is measured at the tag block", "0-31" "``TCC_CC_REQ[n]``", "Req", "Number of coherently cached requests. This is measured at the tag block", "0-31" "``TCC_RW_REQ[n]``", "Req", "Number of coherently cached with write requests. This is measured at the tag block", "0-31" "``TCC_PROBE[n]``", "Req", "Number of probe requests", "0-31" "``TCC_PROBE_ALL[n]``", "Req", "Number of external probe requests with ``EA_TCC_preq_all == 1``", "0-31" "``TCC_READ[n]``", "Req", "Number of L2 cache read requests (includes compressed reads but not metadata reads)", "0-31" "``TCC_WRITE[n]``", "Req", "Number of L2 cache write requests", "0-31" "``TCC_ATOMIC[n]``", "Req", "Number of L2 cache atomic requests of all types", "0-31" "``TCC_HIT[n]``", "Req", "Number of L2 cache hits", "0-31" "``TCC_MISS[n]``", "Req", "Number of L2 cache misses", "0-31" "``TCC_WRITEBACK[n]``", "Req", "Number of lines written back to the main memory, including writebacks of dirty lines and uncached write or atomic requests", "0-31" "``TCC_EA0_WRREQ[n]``", "Req", "Number of 32-byte and 64-byte transactions going over the ``TC_EA_wrreq`` interface (doesn't include probe commands)", "0-31" "``TCC_EA0_WRREQ_64B[n]``", "Req", "Total number of 64-byte transactions (write or ``CMPSWAP``) going over the ``TC_EA_wrreq`` interface", "0-31" "``TCC_EA0_WR_UNCACHED_32B[n]``", "Req", "Number of 32 or 64-byte write or atomic going over the ``TC_EA_wrreq`` interface due to uncached traffic", "0-31" "``TCC_EA0_WRREQ_STALL[n]``", "Cycles", "Number of cycles a write request is stalled", "0-31" "``TCC_EA0_WRREQ_IO_CREDIT_STALL[n]``", "Cycles", "Number of cycles an efficiency arbiter write request is stalled due to the interface running out of input-output (IO) credits", "0-31" "``TCC_EA0_WRREQ_GMI_CREDIT_STALL[n]``", "Cycles", "Number of cycles an efficiency arbiter write request is stalled due to the interface running out of GMI credits", "0-31" "``TCC_EA0_WRREQ_DRAM_CREDIT_STALL[n]``", "Cycles", "Number of cycles an efficiency arbiter write request is stalled due to the interface running out of DRAM credits", "0-31" "``TCC_TOO_MANY_EA_WRREQS_STALL[n]``", "Cycles", "Number of cycles the L2 cache is unable to send 
an efficiency arbiter write request due to it reaching its maximum capacity of pending efficiency arbiter write requests", "0-31" "``TCC_EA0_WRREQ_LEVEL[n]``", "Req", "The accumulated number of efficiency arbiter write requests in flight", "0-31" "``TCC_EA0_ATOMIC[n]``", "Req", "Number of 32-byte or 64-byte atomic requests going over the ``TC_EA_wrreq`` interface", "0-31" "``TCC_EA0_ATOMIC_LEVEL[n]``", "Req", "The accumulated number of efficiency arbiter atomic requests in flight", "0-31" "``TCC_EA0_RDREQ[n]``", "Req", "Number of 32-byte or 64-byte read requests to efficiency arbiter", "0-31" "``TCC_EA0_RDREQ_32B[n]``", "Req", "Number of 32-byte read requests to efficiency arbiter", "0-31" "``TCC_EA0_RD_UNCACHED_32B[n]``", "Req", "Number of 32-byte efficiency arbiter reads due to uncached traffic. A 64-byte request is counted as 2", "0-31" "``TCC_EA0_RDREQ_IO_CREDIT_STALL[n]``", "Cycles", "Number of cycles there is a stall due to the read request interface running out of IO credits", "0-31" "``TCC_EA0_RDREQ_GMI_CREDIT_STALL[n]``", "Cycles", "Number of cycles there is a stall due to the read request interface running out of GMI credits", "0-31" "``TCC_EA0_RDREQ_DRAM_CREDIT_STALL[n]``", "Cycles", "Number of cycles there is a stall due to the read request interface running out of DRAM credits", "0-31" "``TCC_EA0_RDREQ_LEVEL[n]``", "Req", "The accumulated number of efficiency arbiter read requests in flight", "0-31" "``TCC_EA0_RDREQ_DRAM[n]``", "Req", "Number of 32-byte or 64-byte efficiency arbiter read requests to High Bandwidth Memory (HBM)", "0-31" "``TCC_EA0_WRREQ_DRAM[n]``", "Req", "Number of 32-byte or 64-byte efficiency arbiter write requests to HBM", "0-31" "``TCC_TAG_STALL[n]``", "Cycles", "Number of cycles the normal request pipeline in the tag is stalled for any reason", "0-31" "``TCC_NORMAL_WRITEBACK[n]``", "Req", "Number of writebacks due to requests that are not writeback requests", "0-31" "``TCC_ALL_TC_OP_WB_WRITEBACK[n]``", "Req", "Number of writebacks due to all ``TC_OP`` writeback requests", "0-31" "``TCC_NORMAL_EVICT[n]``", "Req", "Number of evictions due to requests that are not invalidate or probe requests", "0-31" "``TCC_ALL_TC_OP_INV_EVICT[n]``", "Req", "Number of evictions due to all ``TC_OP`` invalidate requests", "0-31" .. tab-item:: MI200 hardware counter .. csv-table:: :header: "Hardware counter", "Unit", "Definition", "Value range for ``n``" "``TCC_CYCLE[n]``", "Cycles", "Number of L2 cache free-running clocks", "0-31" "``TCC_BUSY[n]``", "Cycles", "Number of L2 cache busy cycles", "0-31" "``TCC_REQ[n]``", "Req", "Number of L2 cache requests of all types (measured at the tag block)", "0-31" "``TCC_STREAMING_REQ[n]``", "Req", "Number of L2 cache streaming requests (measured at the tag block)", "0-31" "``TCC_NC_REQ[n]``", "Req", "Number of non-coherently cached requests (measured at the tag block)", "0-31" "``TCC_UC_REQ[n]``", "Req", "Number of uncached requests. This is measured at the tag block", "0-31" "``TCC_CC_REQ[n]``", "Req", "Number of coherently cached requests. This is measured at the tag block", "0-31" "``TCC_RW_REQ[n]``", "Req", "Number of coherently cached with write requests. 
This is measured at the tag block", "0-31" "``TCC_PROBE[n]``", "Req", "Number of probe requests", "0-31" "``TCC_PROBE_ALL[n]``", "Req", "Number of external probe requests with ``EA_TCC_preq_all == 1``", "0-31" "``TCC_READ[n]``", "Req", "Number of L2 cache read requests (includes compressed reads but not metadata reads)", "0-31" "``TCC_WRITE[n]``", "Req", "Number of L2 cache write requests", "0-31" "``TCC_ATOMIC[n]``", "Req", "Number of L2 cache atomic requests of all types", "0-31" "``TCC_HIT[n]``", "Req", "Number of L2 cache hits", "0-31" "``TCC_MISS[n]``", "Req", "Number of L2 cache misses", "0-31" "``TCC_WRITEBACK[n]``", "Req", "Number of lines written back to the main memory, including writebacks of dirty lines and uncached write or atomic requests", "0-31" "``TCC_EA_WRREQ[n]``", "Req", "Number of 32-byte and 64-byte transactions going over the ``TC_EA_wrreq`` interface (doesn't include probe commands)", "0-31" "``TCC_EA_WRREQ_64B[n]``", "Req", "Total number of 64-byte transactions (write or ``CMPSWAP``) going over the ``TC_EA_wrreq`` interface", "0-31" "``TCC_EA_WR_UNCACHED_32B[n]``", "Req", "Number of 32-byte write or atomic requests going over the ``TC_EA_wrreq`` interface due to uncached traffic. A 64-byte request will be counted as 2", "0-31" "``TCC_EA_WRREQ_STALL[n]``", "Cycles", "Number of cycles a write request is stalled", "0-31" "``TCC_EA_WRREQ_IO_CREDIT_STALL[n]``", "Cycles", "Number of cycles an efficiency arbiter write request is stalled due to the interface running out of input-output (IO) credits", "0-31" "``TCC_EA_WRREQ_GMI_CREDIT_STALL[n]``", "Cycles", "Number of cycles an efficiency arbiter write request is stalled due to the interface running out of GMI credits", "0-31" "``TCC_EA_WRREQ_DRAM_CREDIT_STALL[n]``", "Cycles", "Number of cycles an efficiency arbiter write request is stalled due to the interface running out of DRAM credits", "0-31" "``TCC_TOO_MANY_EA_WRREQS_STALL[n]``", "Cycles", "Number of cycles the L2 cache is unable to send an efficiency arbiter write request due to it reaching its maximum capacity of pending efficiency arbiter write requests", "0-31" "``TCC_EA_WRREQ_LEVEL[n]``", "Req", "The accumulated number of efficiency arbiter write requests in flight", "0-31" "``TCC_EA_ATOMIC[n]``", "Req", "Number of 32-byte or 64-byte atomic requests going over the ``TC_EA_wrreq`` interface", "0-31" "``TCC_EA_ATOMIC_LEVEL[n]``", "Req", "The accumulated number of efficiency arbiter atomic requests in flight", "0-31" "``TCC_EA_RDREQ[n]``", "Req", "Number of 32-byte or 64-byte read requests to efficiency arbiter", "0-31" "``TCC_EA_RDREQ_32B[n]``", "Req", "Number of 32-byte read requests to efficiency arbiter", "0-31" "``TCC_EA_RD_UNCACHED_32B[n]``", "Req", "Number of 32-byte efficiency arbiter reads due to uncached traffic.
A 64-byte request is counted as 2", "0-31" "``TCC_EA_RDREQ_IO_CREDIT_STALL[n]``", "Cycles", "Number of cycles there is a stall due to the read request interface running out of IO credits", "0-31" "``TCC_EA_RDREQ_GMI_CREDIT_STALL[n]``", "Cycles", "Number of cycles there is a stall due to the read request interface running out of GMI credits", "0-31" "``TCC_EA_RDREQ_DRAM_CREDIT_STALL[n]``", "Cycles", "Number of cycles there is a stall due to the read request interface running out of DRAM credits", "0-31" "``TCC_EA_RDREQ_LEVEL[n]``", "Req", "The accumulated number of efficiency arbiter read requests in flight", "0-31" "``TCC_EA_RDREQ_DRAM[n]``", "Req", "Number of 32-byte or 64-byte efficiency arbiter read requests to High Bandwidth Memory (HBM)", "0-31" "``TCC_EA_WRREQ_DRAM[n]``", "Req", "Number of 32-byte or 64-byte efficiency arbiter write requests to HBM", "0-31" "``TCC_TAG_STALL[n]``", "Cycles", "Number of cycles the normal request pipeline in the tag is stalled for any reason", "0-31" "``TCC_NORMAL_WRITEBACK[n]``", "Req", "Number of writebacks due to requests that are not writeback requests", "0-31" "``TCC_ALL_TC_OP_WB_WRITEBACK[n]``", "Req", "Number of writebacks due to all ``TC_OP`` writeback requests", "0-31" "``TCC_NORMAL_EVICT[n]``", "Req", "Number of evictions due to requests that are not invalidate or probe requests", "0-31" "``TCC_ALL_TC_OP_INV_EVICT[n]``", "Req", "Number of evictions due to all ``TC_OP`` invalidate requests", "0-31" Note the following: * ``TCC_REQ[n]`` may be more than the number of requests arriving at the texture cache per channel, but it's a good indication of the total amount of work that needs to be performed. * For ``TCC_EA0_WRREQ[n]``, atomics may travel over the same interface and are generally classified as write requests. * Coherently cached (CC) mtypes can produce uncached requests, and those are included in ``TCC_EA0_WR_UNCACHED_32B[n]``. * ``TCC_EA0_WRREQ_LEVEL[n]`` is primarily intended to measure average efficiency arbiter write latency. * Average write latency = ``TCC_PERF_SEL_EA0_WRREQ_LEVEL`` divided by ``TCC_PERF_SEL_EA0_WRREQ``. * ``TCC_EA0_ATOMIC_LEVEL[n]`` is primarily intended to measure average efficiency arbiter atomic latency. * Average atomic latency = ``TCC_PERF_SEL_EA0_WRREQ_ATOMIC_LEVEL`` divided by ``TCC_PERF_SEL_EA0_WRREQ_ATOMIC``. * ``TCC_EA0_RDREQ_LEVEL[n]`` is primarily intended to measure average efficiency arbiter read latency. * Average read latency = ``TCC_PERF_SEL_EA0_RDREQ_LEVEL`` divided by ``TCC_PERF_SEL_EA0_RDREQ``. * Stalls can occur regardless of the need for a read to be performed. * Normally, stalls are measured exactly at one point in the pipeline. However, in the case of ``TCC_TAG_STALL[n]``, probes can stall the pipeline at a variety of places, so there is no single point that can accurately measure the total stalls. MI300 and MI200 Series derived metrics list ============================================================== ..
csv-table:: :header: "Hardware counter", "Definition" "``ALUStalledByLDS``", "Percentage of GPU time ALU units are stalled due to the LDS input queue being full or the output queue not being ready (value range: 0% (optimal) to 100%)" "``FetchSize``", "Total kilobytes fetched from the video memory; measured with all extra fetches and any cache or memory effects taken into account" "``FlatLDSInsts``", "Average number of flat instructions that read from or write to LDS, run per work item (affected by flow control)" "``FlatVMemInsts``", "Average number of flat instructions that read from or write to the video memory, run per work item (affected by flow control). Includes flat instructions that read from or write to scratch" "``GDSInsts``", "Average number of global data share read or write instructions run per work item (affected by flow control)" "``GPUBusy``", "Percentage of time GPU is busy" "``L2CacheHit``", "Percentage of fetch, write, atomic, and other instructions that hit the data in L2 cache (value range: 0% (no hit) to 100% (optimal))" "``LDSBankConflict``", "Percentage of GPU time LDS is stalled by bank conflicts (value range: 0% (optimal) to 100%)" "``LDSInsts``", "Average number of LDS read or write instructions run per work item (affected by flow control). Excludes flat instructions that read from or write to LDS." "``MemUnitBusy``", "Percentage of GPU time the memory unit is active, which is measured with all extra fetches and writes and any cache or memory effects taken into account (value range: 0% to 100% (fetch-bound))" "``MemUnitStalled``", "Percentage of GPU time the memory unit is stalled (value range: 0% (optimal) to 100%)" "``MemWrites32B``", "Total number of effective 32B write transactions to the memory" "``TCA_BUSY_sum``", "Total number of cycles texture cache arbiter has a pending request, over all texture cache arbiter instances" "``TCA_CYCLE_sum``", "Total number of cycles over all texture cache arbiter instances" "``SALUBusy``", "Percentage of GPU time scalar ALU instructions are processed (value range: 0% to 100% (optimal))" "``SALUInsts``", "Average number of scalar ALU instructions run per work item (affected by flow control)" "``SFetchInsts``", "Average number of scalar fetch instructions from the video memory run per work item (affected by flow control)" "``VALUBusy``", "Percentage of GPU time vector ALU instructions are processed (value range: 0% to 100% (optimal))" "``VALUInsts``", "Average number of vector ALU instructions run per work item (affected by flow control)" "``VALUUtilization``", "Percentage of active vector ALU threads in a wave, where a lower number can mean either more thread divergence in a wave or that the work-group size is not a multiple of 64 (value range: 0%, 100% (optimal - no thread divergence))" "``VFetchInsts``", "Average number of vector fetch instructions from the video memory run per work-item (affected by flow control); excludes flat instructions that fetch from video memory" "``VWriteInsts``", "Average number of vector write instructions to the video memory run per work-item (affected by flow control); excludes flat instructions that write to video memory" "``Wavefronts``", "Total wavefronts" "``WRITE_REQ_32B``", "Total number of 32-byte effective memory writes" "``WriteSize``", "Total kilobytes written to the video memory; measured with all extra fetches and any cache or memory effects taken into account" "``WriteUnitStalled``", "Percentage of GPU time the write unit is stalled (value range: 0% (optimal) to 100%)" You can 
lower ``ALUStalledByLDS`` by reducing LDS bank conflicts or number of LDS accesses. You can lower ``MemUnitStalled`` by reducing the number or size of fetches and writes. ``MemUnitBusy`` includes the stall time (``MemUnitStalled``). Hardware counters by and over all texture addressing unit instances --------------------------------------------------------------------------------------------------------------- The following table shows the hardware counters *by* all texture addressing unit instances. .. csv-table:: :header: "Hardware counter", "Definition" "``TA_BUFFER_WAVEFRONTS_sum``", "Total number of buffer wavefronts processed" "``TA_BUFFER_READ_WAVEFRONTS_sum``", "Total number of buffer read wavefronts processed" "``TA_BUFFER_WRITE_WAVEFRONTS_sum``", "Total number of buffer write wavefronts processed" "``TA_BUFFER_ATOMIC_WAVEFRONTS_sum``", "Total number of buffer atomic wavefronts processed" "``TA_BUFFER_TOTAL_CYCLES_sum``", "Total number of buffer cycles (including read and write) issued to texture cache" "``TA_BUFFER_COALESCED_READ_CYCLES_sum``", "Total number of coalesced buffer read cycles issued to texture cache" "``TA_BUFFER_COALESCED_WRITE_CYCLES_sum``", "Total number of coalesced buffer write cycles issued to texture cache" "``TA_FLAT_READ_WAVEFRONTS_sum``", "Sum of flat opcode reads processed" "``TA_FLAT_WRITE_WAVEFRONTS_sum``", "Sum of flat opcode writes processed" "``TA_FLAT_WAVEFRONTS_sum``", "Total number of flat opcode wavefronts processed" "``TA_FLAT_ATOMIC_WAVEFRONTS_sum``", "Total number of flat opcode atomic wavefronts processed" "``TA_TOTAL_WAVEFRONTS_sum``", "Total number of wavefronts processed" The following table shows the hardware counters *over* all texture addressing unit instances. .. csv-table:: :header: "Hardware counter", "Definition" "``TA_ADDR_STALLED_BY_TC_CYCLES_sum``", "Total number of cycles texture addressing unit address path is stalled by texture cache" "``TA_ADDR_STALLED_BY_TD_CYCLES_sum``", "Total number of cycles texture addressing unit address path is stalled by texture data unit" "``TA_BUSY_avr``", "Average number of busy cycles" "``TA_BUSY_max``", "Maximum number of texture addressing unit busy cycles" "``TA_BUSY_min``", "Minimum number of texture addressing unit busy cycles" "``TA_DATA_STALLED_BY_TC_CYCLES_sum``", "Total number of cycles texture addressing unit data path is stalled by texture cache" "``TA_TA_BUSY_sum``", "Total number of texture addressing unit busy cycles" Hardware counters over all texture cache per channel instances --------------------------------------------------------------------------------------------------------------- .. csv-table:: :header: "Hardware counter", "Definition" "``TCC_ALL_TC_OP_WB_WRITEBACK_sum``", "Total number of writebacks due to all ``TC_OP`` writeback requests." "``TCC_ALL_TC_OP_INV_EVICT_sum``", "Total number of evictions due to all ``TC_OP`` invalidate requests." "``TCC_ATOMIC_sum``", "Total number of L2 cache atomic requests of all types." "``TCC_BUSY_avr``", "Average number of L2 cache busy cycles." "``TCC_BUSY_sum``", "Total number of L2 cache busy cycles." "``TCC_CC_REQ_sum``", "Total number of coherently cached requests." "``TCC_CYCLE_sum``", "Total number of L2 cache free running clocks." "``TCC_EA0_WRREQ_sum``", "Total number of 32-byte and 64-byte transactions going over the ``TC_EA0_wrreq`` interface. Atomics may travel over the same interface and are generally classified as write requests. This does not include probe commands." 
"``TCC_EA0_WRREQ_64B_sum``", "Total number of 64-byte transactions (write or `CMPSWAP`) going over the ``TC_EA0_wrreq`` interface." "``TCC_EA0_WR_UNCACHED_32B_sum``", "Total Number of 32-byte write or atomic going over the ``TC_EA0_wrreq`` interface due to uncached traffic. Note that coherently cached mtypes can produce uncached requests, and those are included in this. A 64-byte request is counted as 2." "``TCC_EA0_WRREQ_STALL_sum``", "Total Number of cycles a write request is stalled, over all instances." "``TCC_EA0_WRREQ_IO_CREDIT_STALL_sum``", "Total number of cycles an efficiency arbiter write request is stalled due to the interface running out of IO credits, over all instances." "``TCC_EA0_WRREQ_GMI_CREDIT_STALL_sum``", "Total number of cycles an efficiency arbiter write request is stalled due to the interface running out of GMI credits, over all instances." "``TCC_EA0_WRREQ_DRAM_CREDIT_STALL_sum``", "Total number of cycles an efficiency arbiter write request is stalled due to the interface running out of DRAM credits, over all instances." "``TCC_EA0_WRREQ_LEVEL_sum``", "Total number of efficiency arbiter write requests in flight." "``TCC_EA0_RDREQ_LEVEL_sum``", "Total number of efficiency arbiter read requests in flight." "``TCC_EA0_ATOMIC_sum``", "Total Number of 32-byte or 64-byte atomic requests going over the ``TC_EA0_wrreq`` interface." "``TCC_EA0_ATOMIC_LEVEL_sum``", "Total number of efficiency arbiter atomic requests in flight." "``TCC_EA0_RDREQ_sum``", "Total number of 32-byte or 64-byte read requests to efficiency arbiter." "``TCC_EA0_RDREQ_32B_sum``", "Total number of 32-byte read requests to efficiency arbiter." "``TCC_EA0_RD_UNCACHED_32B_sum``", "Total number of 32-byte efficiency arbiter reads due to uncached traffic." "``TCC_EA0_RDREQ_IO_CREDIT_STALL_sum``", "Total number of cycles there is a stall due to the read request interface running out of IO credits." "``TCC_EA0_RDREQ_GMI_CREDIT_STALL_sum``", "Total number of cycles there is a stall due to the read request interface running out of GMI credits." "``TCC_EA0_RDREQ_DRAM_CREDIT_STALL_sum``", "Total number of cycles there is a stall due to the read request interface running out of DRAM credits." "``TCC_EA0_RDREQ_DRAM_sum``", "Total number of 32-byte or 64-byte efficiency arbiter read requests to HBM." "``TCC_EA0_WRREQ_DRAM_sum``", "Total number of 32-byte or 64-byte efficiency arbiter write requests to HBM." "``TCC_HIT_sum``", "Total number of L2 cache hits." "``TCC_MISS_sum``", "Total number of L2 cache misses." "``TCC_NC_REQ_sum``", "Total number of non-coherently cached requests." "``TCC_NORMAL_WRITEBACK_sum``", "Total number of writebacks due to requests that are not writeback requests." "``TCC_NORMAL_EVICT_sum``", "Total number of evictions due to requests that are not invalidate or probe requests." "``TCC_PROBE_sum``", "Total number of probe requests." "``TCC_PROBE_ALL_sum``", "Total number of external probe requests with ``EA0_TCC_preq_all == 1``." "``TCC_READ_sum``", "Total number of L2 cache read requests (including compressed reads but not metadata reads)." "``TCC_REQ_sum``", "Total number of all types of L2 cache requests." "``TCC_RW_REQ_sum``", "Total number of coherently cached with write requests." "``TCC_STREAMING_REQ_sum``", "Total number of L2 cache streaming requests." "``TCC_TAG_STALL_sum``", "Total number of cycles the normal request pipeline in the tag is stalled for any reason." 
"``TCC_TOO_MANY_EA0_WRREQS_STALL_sum``", "Total number of cycles L2 cache is unable to send an efficiency arbiter write request due to it reaching its maximum capacity of pending efficiency arbiter write requests." "``TCC_UC_REQ_sum``", "Total number of uncached requests." "``TCC_WRITE_sum``", "Total number of L2 cache write requests." "``TCC_WRITEBACK_sum``", "Total number of lines written back to the main memory including writebacks of dirty lines and uncached write or atomic requests." "``TCC_WRREQ_STALL_max``", "Maximum number of cycles a write request is stalled." Hardware counters by, for, or over all texture cache per pipe instances ---------------------------------------------------------------------------------------------------------------- The following table shows the hardware counters *by* all texture cache per pipe instances. .. csv-table:: :header: "Hardware counter", "Definition" "``TCP_TA_TCP_STATE_READ_sum``", "Total number of state reads by ATCPPI" "``TCP_TOTAL_CACHE_ACCESSES_sum``", "Total number of vector L1d accesses (including hits and misses)" "``TCP_UTCL1_PERMISSION_MISS_sum``", "Total number of unified translation cache (L1) permission misses" "``TCP_UTCL1_REQUEST_sum``", "Total number of address translation requests to unified translation cache (L1)" "``TCP_UTCL1_TRANSLATION_MISS_sum``", "Total number of unified translation cache (L1) translation misses" "``TCP_UTCL1_TRANSLATION_HIT_sum``", "Total number of unified translation cache (L1) translation hits" The following table shows the hardware counters *for* all texture cache per pipe instances. .. csv-table:: :header: "Hardware counter", "Definition" "``TCP_TCC_READ_REQ_LATENCY_sum``", "Total vector L1d to L2 request latency over all wavefronts for reads and atomics with return" "``TCP_TCC_WRITE_REQ_LATENCY_sum``", "Total vector L1d to L2 request latency over all wavefronts for writes and atomics without return" "``TCP_TCP_LATENCY_sum``", "Total wave access latency to vector L1d over all wavefronts" The following table shows the hardware counters *over* all texture cache per pipe instances. .. 
csv-table:: :header: "Hardware counter", "Definition" "``TCP_ATOMIC_TAGCONFLICT_STALL_CYCLES_sum``", "Total number of cycles tag RAM conflict stalls on an atomic" "``TCP_GATE_EN1_sum``", "Total number of cycles vector L1d interface clocks are turned on" "``TCP_GATE_EN2_sum``", "Total number of cycles vector L1d core clocks are turned on" "``TCP_PENDING_STALL_CYCLES_sum``", "Total number of cycles vector L1d cache is stalled due to data pending from L2 Cache" "``TCP_READ_TAGCONFLICT_STALL_CYCLES_sum``", "Total number of cycles tag RAM conflict stalls on a read" "``TCP_TCC_ATOMIC_WITH_RET_REQ_sum``", "Total number of atomic requests to L2 cache with return" "``TCP_TCC_ATOMIC_WITHOUT_RET_REQ_sum``", "Total number of atomic requests to L2 cache without return" "``TCP_TCC_CC_READ_REQ_sum``", "Total number of coherently cached read requests to L2 cache" "``TCP_TCC_CC_WRITE_REQ_sum``", "Total number of coherently cached write requests to L2 cache" "``TCP_TCC_CC_ATOMIC_REQ_sum``", "Total number of coherently cached atomic requests to L2 cache" "``TCP_TCC_NC_READ_REQ_sum``", "Total number of non-coherently cached read requests to L2 cache" "``TCP_TCC_NC_WRITE_REQ_sum``", "Total number of non-coherently cached write requests to L2 cache" "``TCP_TCC_NC_ATOMIC_REQ_sum``", "Total number of non-coherently cached atomic requests to L2 cache" "``TCP_TCC_READ_REQ_sum``", "Total number of read requests to L2 cache" "``TCP_TCC_RW_READ_REQ_sum``", "Total number of coherently cached with write read requests to L2 cache" "``TCP_TCC_RW_WRITE_REQ_sum``", "Total number of coherently cached with write write requests to L2 cache" "``TCP_TCC_RW_ATOMIC_REQ_sum``", "Total number of coherently cached with write atomic requests to L2 cache" "``TCP_TCC_UC_READ_REQ_sum``", "Total number of uncached read requests to L2 cache" "``TCP_TCC_UC_WRITE_REQ_sum``", "Total number of uncached write requests to L2 cache" "``TCP_TCC_UC_ATOMIC_REQ_sum``", "Total number of uncached atomic requests to L2 cache" "``TCP_TCC_WRITE_REQ_sum``", "Total number of write requests to L2 cache" "``TCP_TCR_TCP_STALL_CYCLES_sum``", "Total number of cycles texture cache router stalls vector L1d" "``TCP_TD_TCP_STALL_CYCLES_sum``", "Total number of cycles texture data unit stalls vector L1d" "``TCP_TOTAL_ACCESSES_sum``", "Total number of vector L1d accesses" "``TCP_TOTAL_READ_sum``", "Total number of vector L1d read accesses" "``TCP_TOTAL_WRITE_sum``", "Total number of vector L1d write accesses" "``TCP_TOTAL_ATOMIC_WITH_RET_sum``", "Total number of vector L1d atomic requests with return" "``TCP_TOTAL_ATOMIC_WITHOUT_RET_sum``", "Total number of vector L1d atomic requests without return" "``TCP_TOTAL_WRITEBACK_INVALIDATES_sum``", "Total number of vector L1d writebacks and invalidates" "``TCP_VOLATILE_sum``", "Total number of L1 volatile pixels or buffers from texture addressing unit" "``TCP_WRITE_TAGCONFLICT_STALL_CYCLES_sum``", "Total number of cycles tag RAM conflict stalls on a write" Hardware counter over all texture data unit instances -------------------------------------------------------- .. 
csv-table:: :header: "Hardware counter", "Definition" "``TD_ATOMIC_WAVEFRONT_sum``", "Total number of atomic wavefront instructions" "``TD_COALESCABLE_WAVEFRONT_sum``", "Total number of coalescable wavefronts according to texture addressing unit" "``TD_LOAD_WAVEFRONT_sum``", "Total number of wavefront instructions (read, write, atomic)" "``TD_SPI_STALL_sum``", "Total number of cycles texture data unit is stalled by shader processor input" "``TD_STORE_WAVEFRONT_sum``", "Total number of write wavefront instructions" "``TD_TC_STALL_sum``", "Total number of cycles texture data unit is stalled waiting for texture cache data" "``TD_TD_BUSY_sum``", "Total number of texture data unit busy cycles while it is processing or waiting for data" --- .. meta:: :description: MI355 Series performance counters and metrics :keywords: MI355, MI355X, MI3XX *********************************** MI350 Series performance counters *********************************** This topic lists and describes the hardware performance counters and derived metrics available on the AMD Instinct MI350 and MI355 GPUs. These counters are available for profiling using `ROCprofiler-SDK `_ and `ROCm Compute Profiler `_. The following sections list the performance counters based on the IP blocks. Command processor packet processor counters (CPC) ================================================== .. list-table:: :header-rows: 1 * - Hardware counter - Definition * - CPC_ALWAYS_COUNT - Always count. * - CPC_ADC_VALID_CHUNK_NOT_AVAIL - ADC valid chunk is not available when dispatch walking is in progress in the multi-xcc mode. * - CPC_ADC_DISPATCH_ALLOC_DONE - ADC dispatch allocation is done. * - CPC_ADC_VALID_CHUNK_END - ADC crawler's valid chunk end in the multi-xcc mode. * - CPC_SYNC_FIFO_FULL_LEVEL - SYNC FIFO full last cycles. * - CPC_SYNC_FIFO_FULL - SYNC FIFO full times. * - CPC_GD_BUSY - ADC busy. * - CPC_TG_SEND - ADC thread group send. * - CPC_WALK_NEXT_CHUNK - ADC walking next valid chunk in the multi-xcc mode. * - CPC_STALLED_BY_SE0_SPI - ADC CSDATA stalled by SE0SPI. * - CPC_STALLED_BY_SE1_SPI - ADC CSDATA stalled by SE1SPI. * - CPC_STALLED_BY_SE2_SPI - ADC CSDATA stalled by SE2SPI. * - CPC_STALLED_BY_SE3_SPI - ADC CSDATA stalled by SE3SPI. * - CPC_LTE_ALL - CPC sync counter LteAll. Only Master XCD manages LteAll. * - CPC_SYNC_WRREQ_FIFO_BUSY - CPC sync counter request FIFO is not empty. * - CPC_CANE_BUSY - CPC CANE bus is busy, which indicates the presence of inflight sync counter requests. * - CPC_CANE_STALL - CPC sync counter sending is stalled by CANE. Shader pipe interpolators (SPI) counters ========================================= .. list-table:: :header-rows: 1 * - Hardware counter - Definition * - SPI_CS0_WINDOW_VALID - Clock count enabled by PIPE0 perfcounter_start event. * - SPI_CS0_BUSY - Number of clocks with outstanding waves for PIPE0 (SPI or SH). * - SPI_CS0_NUM_THREADGROUPS - Number of thread groups launched for PIPE0. * - SPI_CS0_CRAWLER_STALL - Number of clocks when PIPE0 event or wave order FIFO is full. * - SPI_CS0_EVENT_WAVE - Number of PIPE0 events and waves. * - SPI_CS0_WAVE - Number of PIPE0 waves. * - SPI_CS1_WINDOW_VALID - Clock count enabled by PIPE1 perfcounter_start event. * - SPI_CS1_BUSY - Number of clocks with outstanding waves for PIPE1 (SPI or SH). * - SPI_CS1_NUM_THREADGROUPS - Number of thread groups launched for PIPE1. * - SPI_CS1_CRAWLER_STALL - Number of clocks when PIPE1 event or wave order FIFO is full. * - SPI_CS1_EVENT_WAVE - Number of PIPE1 events and waves. 
* - SPI_CS1_WAVE - Number of PIPE1 waves. * - SPI_CS2_WINDOW_VALID - Clock count enabled by PIPE2 perfcounter_start event. * - SPI_CS2_BUSY - Number of clocks with outstanding waves for PIPE2 (SPI or SH). * - SPI_CS2_NUM_THREADGROUPS - Number of thread groups launched for PIPE2. * - SPI_CS2_CRAWLER_STALL - Number of clocks when PIPE2 event or wave order FIFO is full. * - SPI_CS2_EVENT_WAVE - Number of PIPE2 events and waves. * - SPI_CS2_WAVE - Number of PIPE2 waves. * - SPI_CS3_WINDOW_VALID - Clock count enabled by PIPE3 perfcounter_start event. * - SPI_CS3_BUSY - Number of clocks with outstanding waves for PIPE3 (SPI or SH). * - SPI_CS3_NUM_THREADGROUPS - Number of thread groups launched for PIPE3. * - SPI_CS3_CRAWLER_STALL - Number of clocks when PIPE3 event or wave order FIFO is full. * - SPI_CS3_EVENT_WAVE - Number of PIPE3 events and waves. * - SPI_CS3_WAVE - Number of PIPE3 waves. * - SPI_CSQ_P0_Q0_OCCUPANCY - Sum of occupancy info for PIPE0 Queue0. * - SPI_CSQ_P0_Q1_OCCUPANCY - Sum of occupancy info for PIPE0 Queue1. * - SPI_CSQ_P0_Q2_OCCUPANCY - Sum of occupancy info for PIPE0 Queue2. * - SPI_CSQ_P0_Q3_OCCUPANCY - Sum of occupancy info for PIPE0 Queue3. * - SPI_CSQ_P0_Q4_OCCUPANCY - Sum of occupancy info for PIPE0 Queue4. * - SPI_CSQ_P0_Q5_OCCUPANCY - Sum of occupancy info for PIPE0 Queue5. * - SPI_CSQ_P0_Q6_OCCUPANCY - Sum of occupancy info for PIPE0 Queue6. * - SPI_CSQ_P0_Q7_OCCUPANCY - Sum of occupancy info for PIPE0 Queue7. * - SPI_CSQ_P1_Q0_OCCUPANCY - Sum of occupancy info for PIPE1 Queue0. * - SPI_CSQ_P1_Q1_OCCUPANCY - Sum of occupancy info for PIPE1 Queue1. * - SPI_CSQ_P1_Q2_OCCUPANCY - Sum of occupancy info for PIPE1 Queue2. * - SPI_CSQ_P1_Q3_OCCUPANCY - Sum of occupancy info for PIPE1 Queue3. * - SPI_CSQ_P1_Q4_OCCUPANCY - Sum of occupancy info for PIPE1 Queue4. * - SPI_CSQ_P1_Q5_OCCUPANCY - Sum of occupancy info for PIPE1 Queue5. * - SPI_CSQ_P1_Q6_OCCUPANCY - Sum of occupancy info for PIPE1 Queue6. * - SPI_CSQ_P1_Q7_OCCUPANCY - Sum of occupancy info for PIPE1 Queue7. * - SPI_CSQ_P2_Q0_OCCUPANCY - Sum of occupancy info for PIPE2 Queue0. * - SPI_CSQ_P2_Q1_OCCUPANCY - Sum of occupancy info for PIPE2 Queue1. * - SPI_CSQ_P2_Q2_OCCUPANCY - Sum of occupancy info for PIPE2 Queue2. * - SPI_CSQ_P2_Q3_OCCUPANCY - Sum of occupancy info for PIPE2 Queue3. * - SPI_CSQ_P2_Q4_OCCUPANCY - Sum of occupancy info for PIPE2 Queue4. * - SPI_CSQ_P2_Q5_OCCUPANCY - Sum of occupancy info for PIPE2 Queue5. * - SPI_CSQ_P2_Q6_OCCUPANCY - Sum of occupancy info for PIPE2 Queue6. * - SPI_CSQ_P2_Q7_OCCUPANCY - Sum of occupancy info for PIPE2 Queue7. * - SPI_CSQ_P3_Q0_OCCUPANCY - Sum of occupancy info for PIPE3 Queue0. * - SPI_CSQ_P3_Q1_OCCUPANCY - Sum of occupancy info for PIPE3 Queue1. * - SPI_CSQ_P3_Q2_OCCUPANCY - Sum of occupancy info for PIPE3 Queue2. * - SPI_CSQ_P3_Q3_OCCUPANCY - Sum of occupancy info for PIPE3 Queue3. * - SPI_CSQ_P3_Q4_OCCUPANCY - Sum of occupancy info for PIPE3 Queue4. * - SPI_CSQ_P3_Q5_OCCUPANCY - Sum of occupancy info for PIPE3 Queue5. * - SPI_CSQ_P3_Q6_OCCUPANCY - Sum of occupancy info for PIPE3 Queue6. * - SPI_CSQ_P3_Q7_OCCUPANCY - Sum of occupancy info for PIPE3 Queue7. * - SPI_CSQ_P0_OCCUPANCY - Sum of occupancy info for all PIPE0 queues. * - SPI_CSQ_P1_OCCUPANCY - Sum of occupancy info for all PIPE1 queues. * - SPI_CSQ_P2_OCCUPANCY - Sum of occupancy info for all PIPE2 queues. * - SPI_CSQ_P3_OCCUPANCY - Sum of occupancy info for all PIPE3 queues. * - SPI_VWC0_VDATA_VALID_WR - Number of clocks VGPR bus_0 writes VGPRs. 
* - SPI_VWC1_VDATA_VALID_WR - Number of clocks VGPR bus_1 writes VGPRs. * - SPI_CSC_WAVE_CNT_BUSY - Number of cycles when there is any wave in the pipe. Compute unit (SQ) counters =========================== .. list-table:: :header-rows: 1 * - Hardware counter - Definition * - SQ_INSTS_VALU_MFMA_F6F4 - Number of VALU V_MFMA_*_F6F4 instructions. * - SQ_INSTS_VALU_MFMA_MOPS_F6F4 - Number of VALU matrix with the performed math operations (add or mul) divided by 512, assuming a full EXEC mask of F6 or F4 data type. * - SQ_ACTIVE_INST_VALU2 - Number of quad-cycles when two VALU instructions are issued (per-simd, nondeterministic). * - SQ_INSTS_LDS_LOAD - Number of LDS load instructions issued (per-simd, emulated). * - SQ_INSTS_LDS_STORE - Number of LDS store instructions issued (per-simd, emulated). * - SQ_INSTS_LDS_ATOMIC - Number of LDS atomic instructions issued (per-simd, emulated). * - SQ_INSTS_LDS_LOAD_BANDWIDTH - Total number of 64-bytes loaded (instrSize * CountOnes(EXEC))/64 (per-simd, emulated). * - SQ_INSTS_LDS_STORE_BANDWIDTH - Total number of 64-bytes written (instrSize * CountOnes(EXEC))/64 (per-simd, emulated). * - SQ_INSTS_LDS_ATOMIC_BANDWIDTH - Total number of 64-bytes atomic (instrSize * CountOnes(EXEC))/64 (per-simd, emulated). * - SQ_INSTS_VALU_FLOPS_FP16 - Counts FLOPS per instruction on float 16 excluding MFMA/SMFMA. * - SQ_INSTS_VALU_FLOPS_FP32 - Counts FLOPS per instruction on float 32 excluding MFMA/SMFMA. * - SQ_INSTS_VALU_FLOPS_FP64 - Counts FLOPS per instruction on float 64 excluding MFMA/SMFMA. * - SQ_INSTS_VALU_FLOPS_FP16_TRANS - Counts FLOPS per instruction on float 16 trans excluding MFMA/SMFMA. * - SQ_INSTS_VALU_FLOPS_FP32_TRANS - Counts FLOPS per instruction on float 32 trans excluding MFMA/SMFMA. * - SQ_INSTS_VALU_FLOPS_FP64_TRANS - Counts FLOPS per instruction on float 64 trans excluding MFMA/SMFMA. * - SQ_INSTS_VALU_IOPS - Counts OPS per instruction on integer or unsigned or bit data (per-simd, emulated). * - SQ_LDS_DATA_FIFO_FULL - Number of cycles LDS data FIFO is full (nondeterministic, unwindowed). * - SQ_LDS_CMD_FIFO_FULL - Number of cycles LDS command FIFO is full (nondeterministic, unwindowed). * - SQ_VMEM_TA_ADDR_FIFO_FULL - Number of cycles texture requests are stalled due to full address FIFO in TA (nondeterministic, unwindowed). * - SQ_VMEM_TA_CMD_FIFO_FULL - Number of cycles texture requests are stalled due to full cmd FIFO in TA (nondeterministic, unwindowed). * - SQ_VMEM_WR_TA_DATA_FIFO_FULL - Number of cycles texture writes are stalled due to full data FIFO in TA (nondeterministic, unwindowed). * - SQC_ICACHE_MISSES_DUPLICATE - Number of duplicate misses (access to a non-resident, miss pending CL) (per-SQ, per-Bank, nondeterministic). * - SQC_DCACHE_MISSES_DUPLICATE - Number of duplicate misses (access to a non-resident, miss pending CL) (per-SQ, per-Bank, nondeterministic). Texture addressing (TA) unit counters ====================================== .. list-table:: :header-rows: 1 * - Hardware counter - Definition * - TA_BUFFER_READ_LDS_WAVEFRONTS - Number of buffer read wavefronts for LDS return processed by the TA. * - TA_FLAT_READ_LDS_WAVEFRONTS - Number of flat opcode reads for LDS return processed by the TA. Texture data (TD) unit counters ================================ .. list-table:: :header-rows: 1 * - Hardware counter - Definition * - TD_WRITE_ACKT_WAVEFRONT - Number of write acknowledgments, sent to SQ and not to SP. * - TD_TD_SP_TRAFFIC - Number of times this TD sends data to the SP. 
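As a rough illustration only: the LDS bandwidth counters listed in the compute unit (SQ) table above report traffic in 64-byte units, so total LDS bytes can be estimated by scaling the counter values. The sketch below uses hypothetical counter values and a hypothetical kernel duration; it is not part of any ROCm tool.

.. code-block:: python

   # Minimal sketch: convert the SQ LDS bandwidth counters (reported in
   # 64-byte units) into bytes and an approximate rate. All values below
   # are hypothetical placeholders standing in for a real profiling run.

   LDS_UNIT_BYTES = 64

   sq_insts_lds_load_bandwidth = 2_000_000   # hypothetical SQ_INSTS_LDS_LOAD_BANDWIDTH
   sq_insts_lds_store_bandwidth = 1_500_000  # hypothetical SQ_INSTS_LDS_STORE_BANDWIDTH
   kernel_duration_s = 0.004                 # hypothetical kernel runtime in seconds

   lds_bytes = (sq_insts_lds_load_bandwidth
                + sq_insts_lds_store_bandwidth) * LDS_UNIT_BYTES
   print(f"LDS traffic: {lds_bytes / 1e9:.2f} GB "
         f"(~{lds_bytes / kernel_duration_s / 1e9:.1f} GB/s)")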
Texture cache per pipe (TCP) counters ====================================== .. list-table:: :header-rows: 1 * - Hardware counter - Definition * - TCP_TCP_TA_ADDR_STALL_CYCLES - TCP stalls TA addr interface. * - TCP_TCP_TA_DATA_STALL_CYCLES - TCP stalls TA data interface. Now windowed. * - TCP_LFIFO_STALL_CYCLES - Memory latency FIFOs full stall. * - TCP_RFIFO_STALL_CYCLES - Memory Request FIFOs full stall. * - TCP_TCR_RDRET_STALL - Write into cache stalled by read return from TCR. * - TCP_PENDING_STALL_CYCLES - Stall due to data pending from L2. * - TCP_UTCL1_SERIALIZATION_STALL - Total number of stalls caused due to serializing translation requests through the UTCL1. * - TCP_UTCL1_THRASHING_STALL - Stall caused by thrashing feature in any probe. Lacks accuracy when the stall signal overlaps between probe0 and probe1, which is worse with MECO of thrashing deadlock. Some probe0 events could miss being counted in with MECO on. This perf count provides a rough thrashing estimate. * - TCP_UTCL1_TRANSLATION_MISS_UNDER_MISS - Translation miss_under_miss. * - TCP_UTCL1_STALL_INFLIGHT_MAX - Total UTCL1 stalls due to inflight counter saturation. * - TCP_UTCL1_STALL_LRU_INFLIGHT - Total UTCL1 stalls due to LRU cache line with inflight traffic. * - TCP_UTCL1_STALL_MULTI_MISS - Total UTCL1 stalls due to arbitrated multiple misses. * - TCP_UTCL1_LFIFO_FULL - Total UTCL1 and UTCL2 latency, which hides FIFO full cycles. * - TCP_UTCL1_STALL_LFIFO_NOT_RES - Total UTCL1 stalls due to UTCL2 latency, which hides FIFO output (not resident). * - TCP_UTCL1_STALL_UTCL2_REQ_OUT_OF_CREDITS - Total UTCL1 stalls due to UTCL2_req being out of credits. * - TCP_CLIENT_UTCL1_INFLIGHT - The sum of inflight client to UTCL1 requests per cycle. * - TCP_TAGRAM0_REQ - Total L2 requests mapping to TagRAM 0 from this TCP to all TCCs. * - TCP_TAGRAM1_REQ - Total L2 requests mapping to TagRAM 1 from this TCP to all TCCs. * - TCP_TAGRAM2_REQ - Total L2 requests mapping to TagRAM 2 from this TCP to all TCCs. * - TCP_TAGRAM3_REQ - Total L2 requests mapping to TagRAM 3 from this TCP to all TCCs. * - TCP_TCP_LATENCY - Total TCP wave latency (from the first clock of wave entering to the first clock of wave leaving). Divide by TA_TCP_STATE_READ to find average wave latency. * - TCP_TCC_READ_REQ_LATENCY - Total TCP to TCC request latency for reads and atomics with return. Not Windowed. * - TCP_TCC_WRITE_REQ_LATENCY - Total TCP to TCC request latency for writes and atomics without return. Not Windowed. * - TCP_TCC_WRITE_REQ_HOLE_LATENCY - Total TCP req to TCC hole latency for writes and atomics. Not Windowed. Texture cache per channel (TCC) counters ========================================= .. list-table:: :header-rows: 1 * - Hardware counter - Definition * - TCC_READ_SECTORS - Total number of 32B data sectors in read requests. * - TCC_WRITE_SECTORS - Total number of 32B data sectors in write requests. * - TCC_ATOMIC_SECTORS - Total number of 32B data sectors in atomic requests. * - TCC_BYPASS_REQ - Number of bypass requests. This is measured at the tag block. * - TCC_LATENCY_FIFO_FULL - Number of cycles when the latency FIFO is full. * - TCC_SRC_FIFO_FULL - Number of cycles when the SRC FIFO is assumed to be full as measured at the IB block. * - TCC_EA0_RDREQ_64B - Number of 64-byte TCC/EA read requests. * - TCC_EA0_RDREQ_128B - Number of 128-byte TCC/EA read requests. * - TCC_IB_REQ - Number of requests through the IB. This measures the number of raw requests from graphics clients to this TCC. 
* - TCC_IB_STALL - Number of cycles when the IB output is stalled. * - TCC_EA0_WRREQ_WRITE_DRAM - Number of TCC/EA write requests (32-byte or 64-byte) destined for DRAM (MC). * - TCC_EA0_WRREQ_ATOMIC_DRAM - Number of TCC/EA atomic requests (32-byte or 64-byte) destined for DRAM (MC). * - TCC_EA0_RDREQ_DRAM_32B - Number of 32-byte TCC/EA read requests due to DRAM traffic. One 64-byte request is counted as two and one 128-byte as four. * - TCC_EA0_RDREQ_GMI_32B - Number of 32-byte TCC/EA read requests due to GMI traffic. One 64-byte request is counted as two and one 128-byte as four. * - TCC_EA0_RDREQ_IO_32B - Number of 32-byte TCC/EA read requests due to IO traffic. One 64-byte request is counted as two and one 128-byte as four. * - TCC_EA0_WRREQ_WRITE_DRAM_32B - Number of 32-byte TCC/EA write requests due to DRAM traffic. One 64-byte request is counted as two. * - TCC_EA0_WRREQ_ATOMIC_DRAM_32B - Number of 32-byte TCC/EA atomic requests due to DRAM traffic. One 64-byte request is counted as two. * - TCC_EA0_WRREQ_WRITE_GMI_32B - Number of 32-byte TCC/EA write requests due to GMI traffic. One 64-byte request is counted as two. * - TCC_EA0_WRREQ_ATOMIC_GMI_32B - Number of 32-byte TCC/EA atomic requests due to GMI traffic. One 64-byte request is counted as two. * - TCC_EA0_WRREQ_WRITE_IO_32B - Number of 32-byte TCC/EA write requests due to IO traffic. One 64-byte request is counted as two. * - TCC_EA0_WRREQ_ATOMIC_IO_32B - Number of 32-byte TCC/EA atomic requests due to IO traffic. One 64-byte request is counted as two. --- .. meta:: :description: Learn about BAR configuration in AMD GPUs and ways to troubleshoot physical addressing limit :keywords: BAR memory, MMIO, GPU memory, Physical Addressing Limit, AMD, ROCm ************************************** Troubleshoot BAR access limitation ************************************** Direct Memory Access (DMA) to PCIe devices using Base Address Registers (BARs) can be restricted due to physical addressing limits. These restrictions can result in data access failures between system components. Peer-to-peer (P2P) DMA is used to access resources such as registers and memory between devices. PCIe devices need memory-mapped input/output (MMIO) space for DMA, and these MMIO spaces are defined in the PCIe BARs. These BARs are a set of 32-bit or 64-bit registers that define the resources that PCIe devices provide. The CPU and other system devices also use these registers to access the resources of the PCIe devices. P2P DMA only works when one device can directly access the local BAR memory of another. If the memory address of a BAR exceeds the physical addressing limit of a device, the device will not be able to access that BAR, whether it is the device's own BAR or the BAR of another device in the system. To handle any BAR access issues that might occur, you need to be aware of the physical address limitations of the devices and understand the :ref:`BAR configuration of AMD GPUs `. This information is important when setting up additional MMIO apertures for PCIe devices in the system's physical address space. Handling physical address limitation ============================================= When a system boots, the system BIOS allocates the physical address space for the components in the system, including system memory and MMIO apertures.
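Before changing BIOS settings, it can help to check where the BAR ranges of the devices in your system currently sit relative to a device's physical addressing limit. The following is a minimal sketch, assuming a Linux system that exposes PCIe resource windows (BARs and the expansion ROM) through ``/sys/bus/pci/devices/<BDF>/resource``; the 44-bit limit is only an example and should be adjusted to match your device.

.. code-block:: python

   from pathlib import Path

   ADDR_LIMIT_BITS = 44  # Example limit; use the physical addressing limit of your device.

   for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
       # Each line of the sysfs resource file is "start end flags" in hexadecimal.
       for index, line in enumerate((dev / "resource").read_text().splitlines()):
           start, end, _flags = (int(field, 16) for field in line.split())
           if end == 0:
               continue  # Unused resource entry.
           status = "ok" if end < (1 << ADDR_LIMIT_BITS) else "above limit"
           print(f"{dev.name} resource {index}: {start:#x}-{end:#x} ({status})")

Any window reported above the limit is a candidate for the BIOS or IOMMU adjustments described next.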
On modern 64-bit platforms, there are generally two or more MMIO apertures: one located below 4 GB of physical address space for 32-bit compatibility, and one or more above 4 GB for devices needing more space. You can control the memory address of the high MMIO aperture from the system BIOS configuration options. This lets you configure the additional MMIO space to align with the physical addressing limit and allows P2P DMA between the devices. For example, if a PCIe device is limited to 44 bits of physical addressing, ensure that the MMIO aperture is set below the 44-bit boundary in the system physical address space. There are two ways to handle this: * Ensure that the high MMIO aperture is within the physical addressing limits of the devices in the system. For example, if the devices have a 44-bit physical addressing limit, set the ``MMIO High Base`` and ``MMIO High size`` options in the BIOS such that the aperture is within the 44-bit address range, and ensure that the ``Above 4G Decoding`` option is Enabled. * Enable the Input-Output Memory Management Unit (IOMMU). When the IOMMU is enabled in non-passthrough mode, it will create a virtual I/O address space for each device on the system. It also ensures that all virtual addresses created in that space are within the physical addressing limits of the device. For more information on IOMMU, see `Input-Output Memory Management Unit (IOMMU) `_. .. _bar-configuration: BAR configuration for AMD GPUs ================================================ The following table shows how the BARs are configured for AMD GPUs. .. list-table:: :widths: 25 25 50 :header-rows: 1 * - BAR Type - Value - Description * - BAR0-1 registers - 64-bit, Prefetchable, GPU memory - 8 GB or 16 GB depending on GPU. Set to less than 2^44 to support P2P access from other GPUs with a 44-bit physical address limit. Prefetchable memory enables faster reads for high-performance computing (HPC) by fetching contiguous data from the same source before it is requested, in anticipation of future requests. * - BAR2-3 registers - 64-bit, Prefetchable, Doorbell - Set to less than 2^44 to support P2P access from other GPUs with a 44-bit physical address limit. As a Doorbell BAR, it indicates to the GPU that a new operation is in its queue to be processed. * - BAR4 register - Optional - Not a boot device * - BAR5 register - 32-bit, Non-prefetchable, MMIO - Set to less than 4 GB. Example of BAR usage on AMD GPUs ------------------------------------- The following is an example configuration of BARs set by the system BIOS on GFX8 GPUs with a 40-bit physical addressing limit: .. code:: shell 11:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Fiji [Radeon R9 FURY / NANO Series] (rev c1) Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Device 0b35 Flags: bus master, fast devsel, latency 0, IRQ 119 Memory at bf40000000 (64-bit, prefetchable) [size=256M] Memory at bf50000000 (64-bit, prefetchable) [size=2M] I/O ports at 3000 [size=256] Memory at c7400000 (32-bit, non-prefetchable) [size=256K] Expansion ROM at c7440000 [disabled] [size=128K] Details of the BARs configured in the example are: **GPU Frame Buffer BAR:** ``Memory at bf40000000 (64-bit, prefetchable) [size=256M]`` The size of the BAR in the example is 256 MB. Generally, it will be the size of the GPU memory (typically 4 GB+). Depending on the physical address limit and generation of AMD GPUs, the BAR can be set below 2^40, 2^44, or 2^48.
**Doorbell BAR:** ``Memory at bf50000000 (64-bit, prefetchable) [size=2M]`` The size of the BAR should typically be less than 10 MB for this generation of GPUs and has been set to 2 MB in the example. This BAR is placed below 2^40 to allow peer-to-peer access from other generations of AMD GPUs. **I/O BAR:** ``I/O ports at 3000 [size=256]`` This is for legacy VGA and boot device support. Because the GPUs used are not connected to a display (VGA devices), this is not a concern, even if it isn't set up in the system BIOS. **MMIO BAR:** ``Memory at c7400000 (32-bit, non-prefetchable) [size=256K]`` The AMD driver requires this to access the configuration registers. Since the remainder of the BAR available is only 1 DWORD (32-bit), this is set to less than 4 GB. In the example, it is fixed at 256 KB. **Expansion ROM:** ``Expansion ROM at c7440000 [disabled] [size=128K]`` This is required by the AMD driver to access the GPU video BIOS. In the example, it is fixed at 128 KB. --- .. meta:: :description: Build ROCm from source :keywords: build ROCm, source, ROCm source, ROCm, repo, make, makefile .. _building-rocm: ************************************************************* Build ROCm from source ************************************************************* ROCm is an open-source software stack that you can build from source code. The source code is available from ``__. The general steps to build ROCm are: #. Clone the ROCm source code #. Prepare the build environment #. Run the build command Because the ROCm stack is constantly evolving, the most current instructions are stored with the source code in GitHub. For detailed build instructions, see `Getting and Building ROCm from Source `_. --- .. meta:: :description: How to install deep learning frameworks for ROCm :keywords: deep learning, frameworks, ROCm, install, PyTorch, TensorFlow, JAX, MAGMA, DeepSpeed, ML, AI ********************************** Deep learning frameworks for ROCm ********************************** Deep learning frameworks provide environments for machine learning, training, fine-tuning, inference, and performance optimization. ROCm offers a complete ecosystem for developing and running deep learning applications efficiently. It also provides ROCm-compatible versions of popular frameworks and libraries, such as PyTorch, TensorFlow, JAX, and others. The AMD ROCm organization actively contributes to open-source development and collaborates closely with framework organizations. This collaboration ensures that framework-specific optimizations effectively leverage AMD GPUs. The table below summarizes information about ROCm-enabled deep learning frameworks. It includes details on ROCm compatibility and third-party tool support, installation steps and options, and links to GitHub resources. For a complete list of supported framework versions on ROCm, see the :doc:`Compatibility matrix <../compatibility/compatibility-matrix>` topic. .. list-table:: :header-rows: 1 :widths: 5 3 6 3 * - Framework - Installation - Installation options - GitHub * - `PyTorch `__ - .. raw:: html - - `Docker image `__ - `Wheels package `__ - `ROCm Base Docker image `__ - `Upstream Docker file `__ - .. raw:: html * - `TensorFlow `__ - .. raw:: html - - `Docker image `__ - `Wheels package `__ - .. raw:: html * - `JAX `__ - .. raw:: html - - `Docker image `__ - .. raw:: html * - `verl `__ - .. raw:: html - - `Docker image `__ - .. raw:: html * - `Stanford Megatron-LM `__ - .. raw:: html - - `Docker image `__ - .. raw:: html * - `DGL `__ - ..
raw:: html - - `Docker image `__ - `Wheels package `__ - .. raw:: html * - `Megablocks `__ - .. raw:: html - - `Docker image `__ - .. raw:: html * - `Ray `__ - .. raw:: html - - `Docker image `__ - `Wheels package `__ - `ROCm Base Docker image `__ - .. raw:: html * - `llama.cpp `__ - .. raw:: html - - `Docker image `__ - `ROCm Base Docker image `__ - .. raw:: html * - `FlashInfer `__ - .. raw:: html - - `Docker image `__ - `ROCm Base Docker image `__ - .. raw:: html Learn how to use your ROCm deep learning environment for training, fine-tuning, inference, and performance optimization through the following guides. * :doc:`rocm-for-ai/index` * :doc:`Use ROCm for training ` * :doc:`Use ROCm for fine-tuning LLMs ` * :doc:`Use ROCm for AI inference ` * :doc:`Use ROCm for AI inference optimization ` --- .. meta:: :description: How to configure MI300X GPUs to fully leverage their capabilities and achieve optimal performance. :keywords: ROCm, AI, machine learning, MI300X, LLM, usage, tutorial, optimization, tuning ************************************** AMD Instinct MI300X performance guides ************************************** The following performance guides provide essential guidance on the necessary steps to properly `configure your system for AMD Instinct™ MI300X GPUs `_. They include detailed instructions on system settings and application :doc:`workload tuning ` to help you leverage the maximum capabilities of these GPUs and achieve superior performance. * `AMD Instinct MI300X system optimization `__ covers essential system settings and system management practices to configure your AMD Instinct MI300X system for performance. * :doc:`/how-to/rocm-for-ai/inference-optimization/workload` covers steps to optimize the performance of AMD Instinct MI300X Series GPUs for HPC and deep learning operations. * :doc:`/how-to/rocm-for-ai/inference/benchmark-docker/vllm` introduces a preconfigured environment for LLM inference, designed to help you test performance with popular models on AMD Instinct MI300X Series GPUs. --- :orphan: .. meta:: :description: Programming guide :keywords: HIP, programming guide, heterogeneous programming, AMD GPU programming .. _hip-programming-guide: ******************************************************************************** Programming guide ******************************************************************************** ROCm provides a robust environment for heterogeneous programs running on CPUs and AMD GPUs. ROCm supports various programming languages and frameworks to help developers access the power of AMD GPUs. The natively supported programming languages are HIP (Heterogeneous-Compute Interface for Portability) and OpenCL, but HIP bindings are available for Python and Fortran. HIP is an API based on C++ that provides a runtime and kernel language for GPU programming and is the essential ROCm programming language. HIP is also designed to be a marshalling language, allowing code written for NVIDIA CUDA to be easily ported to run on AMD GPUs. Developers can use HIP to write kernels that execute on AMD GPUs while maintaining compatibility with CUDA-based systems. OpenCL (Open Computing Language) is an open standard for cross-platform, parallel programming of diverse processors. ROCm supports OpenCL for developers who want to use standard frameworks across different hardware platforms, including CPUs, GPUs, and APUs. For more information, see `OpenCL `_. Python bindings can be found at https://github.com/ROCm/hip-python. 
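As a brief, hedged illustration of the Python bindings, the sketch below assumes the ``hip-python`` package is installed and that its functions follow the package's convention of returning a tuple that starts with a ``hipError_t`` status code followed by any output values; check the HIP Python documentation for the exact signatures.

.. code-block:: python

   from hip import hip

   def hip_check(call_result):
       # HIP Python calls return (status, *outputs); raise if the status is not hipSuccess.
       err, *values = call_result
       if err != hip.hipError_t.hipSuccess:
           raise RuntimeError(f"HIP call failed: {err}")
       return values[0] if len(values) == 1 else values

   # Query the HIP runtime version and the number of visible AMD GPUs.
   runtime_version = hip_check(hip.hipRuntimeGetVersion())
   device_count = hip_check(hip.hipGetDeviceCount())
   print(f"HIP runtime version: {runtime_version}")
   print(f"Visible HIP devices: {device_count}")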
Python is popular in AI and machine learning applications due to available frameworks like TensorFlow and PyTorch. Fortran bindings can be found at https://github.com/ROCm/hipfort. It enables scientific, academic, and legacy applications, particularly those in high-performance computing, to run on AMD GPUs via HIP. For a complete description of the HIP programming language, see the :doc:`HIP programming guide`. --- .. meta:: :description: How to fine-tune models with ROCm :keywords: ROCm, LLM, fine-tuning, inference, usage, tutorial, deep learning, PyTorch, TensorFlow, JAX ************************* Fine-tuning and inference ************************* Fine-tuning using ROCm involves leveraging AMD's GPU-accelerated :doc:`libraries ` and :doc:`tools ` to optimize and train deep learning models. ROCm provides a comprehensive ecosystem for deep learning development, including open-source libraries for optimized deep learning operations and ROCm-aware versions of :doc:`deep learning frameworks <../../deep-learning-rocm>` such as PyTorch, TensorFlow, and JAX. Single-accelerator systems, such as a machine equipped with a single GPU, are commonly used for smaller-scale deep learning tasks, including fine-tuning pre-trained models and running inference on moderately sized datasets. See :doc:`single-gpu-fine-tuning-and-inference`. Multi-accelerator systems, on the other hand, consist of multiple GPUs working in parallel. These systems are typically used in LLMs and other large-scale deep learning tasks where performance, scalability, and the handling of massive datasets are crucial. See :doc:`multi-gpu-fine-tuning-and-inference`. --- .. meta:: :description: How to fine-tune LLMs with ROCm :keywords: ROCm, LLM, fine-tuning, usage, tutorial, GPUs, Llama, accelerators ******************************************* Use ROCm for fine-tuning LLMs ******************************************* Fine-tuning is an essential technique in machine learning, where a pre-trained model, typically trained on a large-scale dataset, is further refined to achieve better performance and adapt to a particular task or dataset of interest. With AMD GPUs, the fine-tuning process benefits from the parallel processing capabilities and efficient resource management, ultimately leading to improved performance and faster model adaptation to the target domain. The ROCm™ software platform helps you optimize this fine-tuning process by supporting various optimization techniques tailored for AMD GPUs. It empowers the fine-tuning of large language models, making them accessible and efficient for specialized tasks. ROCm supports the broader AI ecosystem to ensure seamless integration with open frameworks, models, and tools. Throughout the following topics, this guide discusses the goals and :ref:`challenges of fine-tuning a large language model ` like Llama 2. In the sections that follow, you'll find practical guides on libraries and tools to accelerate your fine-tuning. The AI Developer Hub contains `AMD ROCm tutorials `_ for training, fine-tuning, and inference. It leverages popular machine learning frameworks on AMD GPUs. - :doc:`Conceptual overview of fine-tuning LLMs ` - :doc:`Fine-tuning and inference ` using a :doc:`single-accelerator ` or :doc:`multi-accelerator ` system. --- .. 
meta:: :description: Model fine-tuning and inference on a multi-GPU system :keywords: ROCm, LLM, fine-tuning, usage, tutorial, multi-GPU, distributed, inference, accelerators, PyTorch, HuggingFace, torchtune ***************************************************** Fine-tuning and inference using multiple GPUs ***************************************************** This section explains how to fine-tune a model on a multi-accelerator system. See :doc:`Single-accelerator fine-tuning ` for a single GPU setup. .. _fine-tuning-llms-multi-gpu-env: Environment setup ================= This section was tested using the following hardware and software environment. .. list-table:: :stub-columns: 1 * - Hardware - 4 AMD Instinct MI300X GPUs * - Software - ROCm 6.1, Ubuntu 22.04, PyTorch 2.1.2, Python 3.10 * - Libraries - ``transformers`` ``datasets`` ``accelerate`` ``huggingface-hub`` ``peft`` ``trl`` ``scipy`` * - Base model - ``meta-llama/Llama-2-7b-chat-hf`` .. _fine-tuning-llms-multi-gpu-env-setup: Setting up the base implementation environment ---------------------------------------------- #. Install PyTorch for ROCm. Refer to the :doc:`PyTorch installation guide `. For consistent installation, it’s recommended to use official ROCm prebuilt Docker images with the framework pre-installed. #. In the Docker container, check the availability of ROCm-capable GPUs using the following command. .. code-block:: shell rocm-smi --showproductname #. Check that your GPUs are available to PyTorch. .. code-block:: python import torch print("Is a ROCm-GPU detected? ", torch.cuda.is_available()) print("How many ROCm-GPUs are detected? ", torch.cuda.device_count()) If successful, your output should look like this: .. code-block:: shell >>> print("Is a ROCm-GPU detected? ", torch.cuda.is_available()) Is a ROCm-GPU detected? True >>> print("How many ROCm-GPUs are detected? ", torch.cuda.device_count()) How many ROCm-GPUs are detected? 4 .. tip:: During training and inference, you can check the memory usage by running the ``rocm-smi`` command in your terminal. This tool helps you see which GPUs are involved. .. _fine-tuning-llms-multi-gpu-hugging-face-accelerate: Hugging Face Accelerate for fine-tuning and inference =========================================================== `Hugging Face Accelerate `__ is a library that simplifies turning raw PyTorch code for a single GPU into code for multiple GPUs for LLM fine-tuning and inference. It is integrated with `Transformers `__, so you can scale your PyTorch code while maintaining performance and flexibility. As a brief example of model fine-tuning and inference using multiple GPUs, let's use Transformers and load in the Llama 2 7B model. Here, let's reuse the code in :ref:`Single-accelerator fine-tuning ` to load the base model and tokenizer. Now, it's important to adjust how you load the model. Add the ``device_map`` parameter to your base model configuration. .. code-block:: python ... base_model_name = "meta-llama/Llama-2-7b-chat-hf" # Load base model to GPU memory base_model = AutoModelForCausalLM.from_pretrained( base_model_name, device_map = "auto", trust_remote_code = True) ... # Run training sft_trainer.train() .. note:: You can let Accelerate handle the device map computation by setting ``device_map`` to one of the supported options (``"auto"``, ``"balanced"``, ``"balanced_low_0"``, ``"sequential"``).
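After loading the model with a ``device_map``, you can inspect how its layers were distributed across the GPUs. The following is a small sketch that reuses the ``base_model`` object from the preceding example; models dispatched through Accelerate record the resulting placement in the ``hf_device_map`` attribute.

.. code-block:: python

   # Print the device assignment that Accelerate computed for each module.
   # Assumes `base_model` was loaded with device_map="auto" as shown above.
   for module_name, device in base_model.hf_device_map.items():
       print(f"{module_name} -> {device}")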
It's recommended to set the ``device_map`` parameter to ``“auto”`` to allow Accelerate to automatically and efficiently allocate the model given the available resources (four GPUs in this case). When you have more GPU memory available than the model size, here is the difference between each ``device_map`` option: * ``"auto"`` and ``"balanced"`` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1. * ``"balanced_low_0"`` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the generate function for Transformers models. * ``"sequential"`` will fit what it can on GPU 0, then move on GPU 1 and so forth. Not all GPUs might be used. After loading the model in this way, the model is fully ready to use the resources available to it. .. _fine-tuning-llms-multi-gpu-torchtune: torchtune for fine-tuning and inference ============================================= `torchtune `_ is a PyTorch-native library for easy single and multi-GPU model fine-tuning and inference with LLMs. #. Install torchtune using pip. .. code-block:: shell # Install torchtune with PyTorch release 2.2.2+ pip install torchtune # To confirm that the package is installed correctly tune --help The output should look like this: .. code-block:: shell usage: tune [-h] {download,ls,cp,run,validate} ... Welcome to the TorchTune CLI! options: -h, --help show this help message and exit subcommands: {download,ls,cp,run,validate} #. torchtune recipes are designed around easily composable components and workable training loops, with minimal abstraction getting in the way of fine-tuning. Run ``tune ls`` to show built-in torchtune configuration recipes. .. code-block:: shell RECIPE CONFIG full_finetune_single_device llama2/7B_full_low_memory llama3/8B_full_single_device mistral/7B_full_low_memory full_finetune_distributed llama2/7B_full llama2/13B_full llama3/8B_full mistral/7B_full gemma/2B_full lora_finetune_single_device llama2/7B_lora_single_device llama2/7B_qlora_single_device llama3/8B_lora_single_device llama3/8B_qlora_single_device llama2/13B_qlora_single_device mistral/7B_lora_single_device The ``RECIPE`` column shows the easy-to-use and workable fine-tuning and inference recipes for popular fine-tuning techniques (such as LoRA). The ``CONFIG`` column lists the YAML configurations for easily configuring training, evaluation, quantization, or inference recipes. The snippet shows the architecture of a model's YAML configuration file: .. code-block:: yaml # Model arguments model: _component_: torchtune.models.llama2.lora_llama2_7b lora_attn_modules: ['q_proj', 'v_proj'] apply_lora_to_mlp: False apply_lora_to_output: False lora_rank: 8 lora_alpha: 16 tokenizer: _component_: torchtune.models.llama2.llama2_tokenizer path: /tmp/Llama-2-7b-hf/tokenizer.model # Dataset and sampler dataset: _component_: torchtune.datasets.alpaca_cleaned_dataset train_on_input: True #. This configuration file defines the fine-tuning base model path, data set, hyper-parameters for optimizer and scheduler, and training data type. To download the base model for fine-tuning, run the following command: .. code-block:: shell tune download meta-llama/Llama-2-7b-hf --output-dir /tmp/Llama-2-7b-hf --hf-token The output directory argument for ``--output-dir`` should map the model path specified in YAML config file. #. 
To launch ``lora_finetune_distributed`` on four devices, run the following command: .. code-block:: shell tune run --nnodes 1 --nproc_per_node 4 lora_finetune_distributed --config llama2/7B_lora If successful, you should see something like the following output: .. code-block:: shell INFO:torchtune.utils.logging:FSDP is enabled. Instantiating Model on CPU for Rank 0 ... INFO:torchtune.utils.logging:Model instantiation took 7.32 secs INFO:torchtune.utils.logging:Memory Stats after model init: {'peak_memory_active': 9.478172672, 'peak_memory_alloc': 8.953868288, 'peak_memory_reserved': 11.112808448} INFO:torchtune.utils.logging:Optimizer and loss are initialized. INFO:torchtune.utils.logging:Dataset and Sampler are initialized. INFO:torchtune.utils.logging:Learning rate scheduler is initialized. 1|111|Loss: 1.5790324211120605: 7%|█ | 114/1618 Read more about inference frameworks in :doc:`LLM inference frameworks <../inference/llm-inference-frameworks>`. --- .. meta:: :description: Conceptual overview of fine-tuning LLMs :keywords: ROCm, LLM, Llama, fine-tuning, usage, tutorial, optimization, LoRA, walkthrough, PEFT, Reinforcement *************************************** Conceptual overview of fine-tuning LLMs *************************************** Large language models (LLMs) are trained on massive amounts of text data to generate coherent and fluent text. The underlying *transformer* architecture is the fundamental building block of all LLMs. Transformers enable LLMs to understand and generate text by capturing contextual relationships and long-range dependencies. To better understand the philosophy of the transformer architecture, review the foundational `Attention is all you need `_ paper. By further training pre-trained LLMs, the fine-tuned model can gain knowledge related to specific fields or tasks, thereby significantly improving its performance in that field or task. The core idea of fine-tuning is to use the parameters of the pre-trained model as the starting point for new tasks and shape them through a small amount of domain- or task-specific data, expanding the original model's capability to new tasks or datasets. Fine-tuning can effectively improve the performance of existing pre-trained models in specific application scenarios. Continuous training and adjustment of the parameters of the base model in the target domain or task can better capture the semantic characteristics and patterns in specific scenarios, thereby significantly improving the key indicators of the model in that domain or task. For example, by fine-tuning the Llama 2 model, its performance in certain applications can be improved over the base model. .. _fine-tuning-llms-concept-challenge: The challenge of fine-tuning models =================================== However, the computational cost of fine-tuning is still high, especially for complex models and large datasets, which poses distinct challenges related to substantial computational and memory requirements. This might be a barrier for GPUs with low computing power or limited device memory resources. For example, suppose we have a language model with 7 billion (7B) parameters, represented by a weight matrix :math:`W`. During backpropagation, the model needs to learn a :math:`ΔW` matrix, which updates the original weights to minimize the value of the loss function. The weight update is as follows: :math:`W_{updated} = W + ΔW`. If the weight matrix :math:`W` contains 7B parameters, then the weight update matrix :math:`ΔW` should also contain 7B parameters.
Therefore, the :math:`ΔW` calculation is computationally and memory intensive. .. figure:: ../../../data/how-to/llm-fine-tuning-optimization/weight-update.png :alt: Weight update diagram (a) Weight update in regular fine-tuning. (b) Weight update in LoRA where the product of matrix A (:math:`M\times K`) and matrix B (:math:`K\times N`) is :math:`ΔW(M\times N)`; dimension K is a hyperparameter. By representing :math:`ΔW` as the product of two smaller matrices (A and B) with a lower rank K, the number of trainable parameters is significantly reduced. .. _fine-tuning-llms-concept-optimizations: Optimizations for model fine-tuning =================================== Low-Rank Adaptation (LoRA) is a technique allowing fast and cost-effective fine-tuning of state-of-the-art LLMs that can overcome this issue of high memory consumption. LoRA accelerates the adjustment process and reduces related memory costs. To be precise, LoRA decomposes the portion of weight changes :math:`ΔW` into high-precision low-rank representations, which do not require the calculations of all :math:`ΔW`. It learns the decomposition representation of :math:`ΔW` during training, as shown in the :ref:`weight update diagram `. This is how LoRA saves on computing resources. LoRA is integrated into the `Hugging Face Parameter-Efficient Fine-Tuning (PEFT) `_ library, as well as other computation and memory efficiency optimization variants for model fine-tuning such as `AdaLoRA `_. This library efficiently adapts large pre-trained models to various downstream applications without fine-tuning all model parameters. PEFT methods only fine-tune a few model parameters, significantly decreasing computational and storage costs while yielding performance comparable to a fully fine-tuned model. PEFT is integrated with the `Hugging Face Transformers `_ library, providing a faster and easier way to load, train, and use large models for inference. To simplify running a fine-tuning implementation, the `Transformer Reinforcement Learning (TRL) `_ library provides a set of tools to train transformer language models with reinforcement learning, from the Supervised Fine-Tuning step (SFT), Reward Modeling step (RM), to the Proximal Policy Optimization (PPO) step. The ``SFTTrainer`` API in TRL encapsulates these PEFT optimizations so you can easily import their custom training configuration and run the training process. .. _fine-tuning-llms-walkthrough-desc: Walkthrough =========== To demonstrate the benefits of LoRA and the ideal compute compatibility of using PEFT and TRL libraries on AMD ROCm-compatible GPUs, let's step through a comprehensive implementation of the fine-tuning process using the Llama 2 7B model with LoRA tailored specifically for question-and-answer tasks on AMD MI300X GPUs. Before starting, review and understand the key components of this walkthrough: - `Llama 2 `_: a family of large language models developed and publicly released by Meta. Its variants range in scale from 7 billion to 70 billion parameters. - Fine-tuning: a critical process that refines LLMs for specialized tasks and optimizes performance. - LoRA: a memory-efficient implementation of LLM fine-tuning that significantly reduces the number of trainable parameters. - `SFTTrainer `_: an optimized trainer with a simple interface to easily fine-tune pre-trained models with PEFT adapters, for example, LoRA, for memory efficiency purposes on a custom dataset. Continue the walkthrough in :doc:`Fine-tuning and inference `. --- .. 
meta:: :description: Model fine-tuning and inference on a single-GPU system :keywords: ROCm, LLM, fine-tuning, usage, tutorial, single-GPU, LoRA, PEFT, inference, SFTTrainer **************************************************** Fine-tuning and inference using a single GPU **************************************************** This section explains model fine-tuning and inference techniques on a single-accelerator system. See :doc:`Multi-accelerator fine-tuning ` for a setup with multiple GPUs. .. _fine-tuning-llms-single-gpu-env: Environment setup ================= This section was tested using the following hardware and software environment. .. list-table:: :stub-columns: 1 * - Hardware - AMD Instinct MI300X GPU * - Software - ROCm 6.1, Ubuntu 22.04, PyTorch 2.1.2, Python 3.10 * - Libraries - ``transformers`` ``datasets`` ``huggingface-hub`` ``peft`` ``trl`` ``scipy`` * - Base model - ``meta-llama/Llama-2-7b-chat-hf`` .. _fine-tuning-llms-single-gpu-env-setup: Setting up the base implementation environment ---------------------------------------------- #. Install PyTorch for ROCm. Refer to the :doc:`PyTorch installation guide `. For a consistent installation, it’s recommended to use official ROCm prebuilt Docker images with the framework pre-installed. #. In the Docker container, check the availability of ROCm-capable GPUs using the following command. .. code-block:: shell rocm-smi --showproductname Your output should look like this: .. code-block:: shell ============================ ROCm System Management Interface ============================ ====================================== Product Info ====================================== GPU[0] : Card Series: AMD Instinct MI300X OAM GPU[0] : Card model: 0x74a1 GPU[0] : Card vendor: Advanced Micro Devices, Inc. [AMD/ATI] GPU[0] : Card SKU: MI3SRIOV ========================================================================================== ================================== End of ROCm SMI Log =================================== #. Check that your GPUs are available to PyTorch. .. code-block:: python import torch print("Is a ROCm-GPU detected? ", torch.cuda.is_available()) print("How many ROCm-GPUs are detected? ", torch.cuda.device_count()) If successful, your output should look like this: .. code-block:: shell >>> print("Is a ROCm-GPU detected? ", torch.cuda.is_available()) Is a ROCm-GPU detected? True >>> print("How many ROCm-GPUs are detected? ", torch.cuda.device_count()) How many ROCm-GPUs are detected? 4 #. Install the required dependencies. bitsandbytes is a library that facilitates quantization to improve the efficiency of deep learning models. Learn more about its use in :doc:`../inference-optimization/model-quantization`. See the :ref:`Optimizations for model fine-tuning ` for a brief discussion on PEFT and TRL. .. code-block:: shell # Install `bitsandbytes` for ROCm 6.0+. # Use -DBNB_ROCM_ARCH to target a specific GPU architecture. git clone --recurse https://github.com/ROCm/bitsandbytes.git cd bitsandbytes git checkout rocm_enabled_multi_backend pip install -r requirements-dev.txt cmake -DBNB_ROCM_ARCH="gfx942" -DCOMPUTE_BACKEND=hip -S . python setup.py install # To leverage the SFTTrainer in TRL for model fine-tuning. pip install trl # To leverage PEFT for efficiently adapting pre-trained language models . pip install peft # Install the other dependencies. pip install transformers datasets huggingface-hub scipy #. Check that the required packages can be imported. .. 
code-block:: python import torch from datasets import load_dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, TrainingArguments ) from peft import LoraConfig from trl import SFTTrainer .. _fine-tuning-llms-single-gpu-download-model-dataset: Download the base model and fine-tuning dataset ----------------------------------------------- #. Request access to download `Meta's official Llama model `_ from Hugging Face. After permission is granted, log in with the following command using your personal access token: .. code-block:: shell huggingface-cli login .. note:: You can also use the `NousResearch Llama-2-7b-chat-hf `_ as a substitute. It has the same model weights as the original. #. Run the following code to load the base model and tokenizer. .. code-block:: python # Base model and tokenizer names. base_model_name = "meta-llama/Llama-2-7b-chat-hf" # Load base model to GPU memory. device = "cuda:0" base_model = AutoModelForCausalLM.from_pretrained(base_model_name, trust_remote_code = True).to(device) # Load tokenizer. tokenizer = AutoTokenizer.from_pretrained( base_model_name, trust_remote_code = True) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "right" #. Now, let's fine-tune the base model for a question-and-answer task using a small dataset called `mlabonne/guanaco-llama2-1k `_, which is a 1,000-sample subset of the `timdettmers/openassistant-guanaco `_ dataset. .. code-block:: python # Dataset for fine-tuning. training_dataset_name = "mlabonne/guanaco-llama2-1k" training_dataset = load_dataset(training_dataset_name, split = "train") # Check the data. print(training_dataset) # Dataset 11 is a QA sample in English. print(training_dataset[11]) #. With the base model and the dataset, let's start fine-tuning! .. _fine-tuning-llms-single-gpu-configure-params: Configure fine-tuning parameters -------------------------------- To set up ``SFTTrainer`` parameters, you can use the following code as a reference. .. code-block:: python # Training parameters for SFTTrainer. training_arguments = TrainingArguments( output_dir = "./results", num_train_epochs = 1, per_device_train_batch_size = 4, gradient_accumulation_steps = 1, optim = "paged_adamw_32bit", save_steps = 50, logging_steps = 50, learning_rate = 4e-5, weight_decay = 0.001, fp16=False, bf16=False, max_grad_norm = 0.3, max_steps = -1, warmup_ratio = 0.03, group_by_length = True, lr_scheduler_type = "constant", report_to = "tensorboard" ) .. _fine-tuning-llms-single-gpu-start: Fine-tuning =========== In this section, you'll see two ways of training: with the LoRA technique and without. See :ref:`Optimizations for model fine-tuning ` for an introduction to LoRA. Training with LoRA uses the ``SFTTrainer`` API with its PEFT integration. Training without LoRA forgoes these benefits. Compare the number of trainable parameters and training time under the two different methodologies. .. tab-set:: .. tab-item:: Fine-tuning with LoRA and PEFT :sync: with 1. Configure LoRA using the following code snippet. .. code-block:: python peft_config = LoraConfig( lora_alpha = 16, lora_dropout = 0.1, r = 64, bias = "none", task_type = "CAUSAL_LM" ) # View the number of trainable parameters. from peft import get_peft_model peft_model = get_peft_model(base_model, peft_config) peft_model.print_trainable_parameters() The output should look like this. Compare the number of trainable parameters to that when fine-tuning without LoRA and PEFT. ..
code-block:: shell trainable params: 33,554,432 || all params: 6,771,970,048 || trainable%: 0.49548996469513035 2. Initialize ``SFTTrainer`` with a PEFT LoRA configuration and run the trainer. .. code-block:: python # Initialize an SFT trainer. sft_trainer = SFTTrainer( model = base_model, train_dataset = training_dataset, peft_config = peft_config, dataset_text_field = "text", tokenizer = tokenizer, args = training_arguments ) # Run the trainer. sft_trainer.train() The output should look like this: .. code-block:: shell {'loss': 1.5973, 'grad_norm': 0.25271978974342346, 'learning_rate': 4e-05, 'epoch': 0.16} {'loss': 2.0519, 'grad_norm': 0.21817368268966675, 'learning_rate': 4e-05, 'epoch': 0.32} {'loss': 1.6147, 'grad_norm': 0.3046981394290924, 'learning_rate': 4e-05, 'epoch': 0.48} {'loss': 1.4124, 'grad_norm': 0.11534837633371353, 'learning_rate': 4e-05, 'epoch': 0.64} {'loss': 1.5627, 'grad_norm': 0.09108350425958633, 'learning_rate': 4e-05, 'epoch': 0.8} {'loss': 1.417, 'grad_norm': 0.2536439299583435, 'learning_rate': 4e-05, 'epoch': 0.96} {'train_runtime': 197.4947, 'train_samples_per_second': 5.063, 'train_steps_per_second': 0.633, 'train_loss': 1.6194254455566406, 'epoch': 1.0} 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 125/125 [03:17<00:00, 1.58s/it] .. tab-item:: Fine-tuning without LoRA and PEFT :sync: without 1. Use the following code to get started. .. code-block:: python def print_trainable_parameters(model): # Prints the number of trainable parameters in the model. trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print(f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}") sft_trainer.peft_config = None print_trainable_parameters(sft_trainer.model) The output should look like this. Compare the number of trainable parameters to that when fine-tuning with LoRA and PEFT. .. code-block:: shell trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.00 2. Run the trainer. .. code-block:: python # Trainer without LoRA config. trainer_full = SFTTrainer( model = base_model, train_dataset = training_dataset, dataset_text_field = "text", tokenizer = tokenizer, args = training_arguments ) # Training. trainer_full.train() The output should look like this: .. code-block:: shell {'loss': 1.5975, 'grad_norm': 0.25113457441329956, 'learning_rate': 4e-05, 'epoch': 0.16} {'loss': 2.0524, 'grad_norm': 0.2180655151605606, 'learning_rate': 4e-05, 'epoch': 0.32} {'loss': 1.6145, 'grad_norm': 0.2949850261211395, 'learning_rate': 4e-05, 'epoch': 0.48} {'loss': 1.4118, 'grad_norm': 0.11036080121994019, 'learning_rate': 4e-05, 'epoch': 0.64} {'loss': 1.5595, 'grad_norm': 0.08962831646203995, 'learning_rate': 4e-05, 'epoch': 0.8} {'loss': 1.4119, 'grad_norm': 0.25422757863998413, 'learning_rate': 4e-05, 'epoch': 0.96} {'train_runtime': 419.5154, 'train_samples_per_second': 2.384, 'train_steps_per_second': 0.298, 'train_loss': 1.6171623611450194, 'epoch': 1.0} 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 125/125 [06:59<00:00, 3.36s/it] .. 
_fine-tuning-llms-single-gpu-saving: Saving adapters or fully fine-tuned models ------------------------------------------ PEFT methods freeze the pre-trained model parameters during fine-tuning and add a smaller number of trainable parameters, namely the adapters, on top of the frozen base model. The adapters are trained to learn specific task information. The adapters trained with PEFT are usually an order of magnitude smaller than the full base model, making them convenient to share, store, and load. .. tab-set:: .. tab-item:: Saving a PEFT adapter :sync: with If you're using LoRA and PEFT, use the following code to save a PEFT adapter to your system once the fine-tuning is completed. .. code-block:: python # PEFT adapter name. adapter_name = "llama-2-7b-enhanced-adapter" # Save PEFT adapter. sft_trainer.model.save_pretrained(adapter_name) The saved PEFT adapter should look like this on your system: .. code-block:: shell # Access adapter directory. cd llama-2-7b-enhanced-adapter # List all adapter files. README.md adapter_config.json adapter_model.safetensors .. tab-item:: Saving a fully fine-tuned model :sync: without If you're not using LoRA and PEFT, so no PEFT LoRA configuration was used for training, use the following code to save your fine-tuned model to your system. .. code-block:: python # Fully fine-tuned model name. new_model_name = "llama-2-7b-enhanced" # Save the fully fine-tuned model. trainer_full.model.save_pretrained(new_model_name) The saved new full model should look like this on your system: .. code-block:: shell # Access new model directory. cd llama-2-7b-enhanced # List all model files. config.json model-00002-of-00006.safetensors model-00005-of-00006.safetensors generation_config.json model-00003-of-00006.safetensors model-00006-of-00006.safetensors model-00001-of-00006.safetensors model-00004-of-00006.safetensors model.safetensors.index.json .. note:: PEFT adapters can’t be loaded by ``AutoModelForCausalLM`` from the Transformers library as they do not contain full model parameters and model configurations, for example, ``config.json``. To use them as a normal transformer model, you need to merge them into the base model. Basic model inference ===================== A trained model can be classified into one of three types: * A PEFT adapter * A pre-trained language model in Hugging Face * A fully fine-tuned model not using PEFT Let's look at achieving model inference using these types of models. .. tab-set:: .. tab-item:: Inference using PEFT adapters To use PEFT adapters like a normal transformer model, you can run the generation by loading a base model along with PEFT adapters as follows. .. code-block:: python from peft import PeftModel from transformers import AutoModelForCausalLM # Set the path of the model or the name on Hugging Face Hub base_model_name = "meta-llama/Llama-2-7b-chat-hf" # Set the path of the adapter adapter_name = "llama-2-7b-enhanced-adapter" # Load base model base_model = AutoModelForCausalLM.from_pretrained(base_model_name) # Adapt the base model with the adapter new_model = PeftModel.from_pretrained(base_model, adapter_name) # Then, run generation the same way as with a normal model (see the next tab) The PEFT library provides a ``merge_and_unload`` method, which merges the adapter layers into the base model. This is needed if someone wants to save the adapted model into local storage and use it as a normal standalone model. ..
code-block:: python # Load base model base_model = AutoModelForCausalLM.from_pretrained(base_model_name) # Adapt the base model with the adapter new_model = PeftModel.from_pretrained(base_model, adapter_name) # Merge adapter model = new_model.merge_and_unload() # Save the merged model to local storage model.save_pretrained("merged_adapters") .. tab-item:: Inference using pre-trained or fully fine-tuned models If you have a fully fine-tuned model not using PEFT, you can load it like any other pre-trained language model in `Hugging Face Hub `_ using the `Transformers `_ library. .. code-block:: python # Import relevant classes for loading the model and tokenizer from transformers import AutoTokenizer, AutoModelForCausalLM # Set the pre-trained model name on Hugging Face Hub model_name = "meta-llama/Llama-2-7b-chat-hf" # Set device type device = "cuda:0" # Load model and tokenizer model = AutoModelForCausalLM.from_pretrained(model_name).to(device) tokenizer = AutoTokenizer.from_pretrained(model_name) # Input prompt encoding query = "What is a large language model?" inputs = tokenizer.encode(query, return_tensors="pt").to(device) # Token generation outputs = model.generate(inputs) # Outputs decoding print(tokenizer.decode(outputs[0])) In addition, pipelines from Transformers offer simple APIs to use pre-trained models for different tasks, including sentiment analysis, feature extraction, question answering, and so on. You can use the pipeline abstraction to achieve model inference easily. .. code-block:: python # Import the pipeline class from transformers import pipeline # Set the path of your model or the name on Hugging Face Hub model_name_or_path = "meta-llama/Llama-2-7b-chat-hf" # Set pipeline # A positive device value will run the model on the associated CUDA device ID pipe = pipeline("text-generation", model=model_name_or_path, device=0) # Token generation print(pipe("What is a large language model?")[0]["generated_text"]) If using multiple GPUs, see :ref:`Multi-accelerator fine-tuning and inference ` to explore popular libraries that simplify fine-tuning and inference in a multiple-GPU system. Read more about inference frameworks like vLLM and Hugging Face TGI in :doc:`LLM inference frameworks <../inference/llm-inference-frameworks>`. --- .. meta:: :description: Learn how to use ROCm for AI. :keywords: ROCm, AI, machine learning, LLM, usage, tutorial ************************** Use ROCm for AI ************************** ROCm is an open-source software platform that enables high-performance computing and machine learning applications. It features the ability to accelerate training, fine-tuning, and inference for AI application development. With ROCm, you can access the full power of AMD GPUs, which can significantly improve the performance and efficiency of AI workloads. You can use ROCm to perform distributed training, which enables you to train models across multiple GPUs or nodes simultaneously. Additionally, ROCm supports mixed-precision training, which can help reduce the memory and compute requirements of training workloads. For fine-tuning, ROCm provides access to various algorithms and optimization techniques. In terms of inference, ROCm provides several techniques that can help you optimize your models for deployment, such as quantization, GEMM tuning, and optimization with composable kernel. Overall, ROCm can be used to improve the performance and efficiency of your AI applications.
With its training, fine-tuning, and inference support, ROCm provides a complete solution for optimizing AI workflows and achieving the optimum results possible on AMD GPUs. The AI Developer Hub contains `AMD ROCm tutorials `_ for training, fine-tuning, and inference. It leverages popular machine learning frameworks on AMD GPUs. In this guide, you'll learn how to use ROCm for AI: - :doc:`Training ` - :doc:`Fine-tuning LLMs ` - :doc:`Inference ` - :doc:`Inference optimization ` To learn about ROCm for HPC applications and scientific computing, see :doc:`../rocm-for-hpc/index`. --- :orphan: **************************************************** SGLang inference performance testing version history **************************************************** This table lists previous versions of the ROCm SGLang inference performance testing environment. For detailed information about available models for benchmarking, see the version-specific documentation. .. list-table:: :header-rows: 1 * - Docker image tag - Components - Resources * - ``lmsysorg/sglang:v0.4.5-rocm630`` - * ROCm 6.3.0 * SGLang 0.4.5 * PyTorch 2.6.0 - * :doc:`Documentation <../sglang>` * `Docker Hub `__ --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker-812: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.0_20250812-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series GPUs and includes the following components: .. list-table:: :header-rows: 1 * - Software component - Version * - `ROCm `__ - {{ unified_docker.rocm_version }} * - `vLLM `__ - {{ unified_docker.vllm_version }} * - `PyTorch `__ - {{ unified_docker.pytorch_version }} * - `hipBLASLt `__ - {{ unified_docker.hipblaslt_version }} With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series GPUs. What's new ========== The following is summary of notable changes since the :doc:`previous ROCm/vLLM Docker release `. * Upgraded to vLLM v0.10. * FP8 KV cache support via AITER. * Full graph capture support via AITER. Supported models ================ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.0_20250812-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} .. _vllm-benchmark-available-models-812: The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. raw:: html
(Model group and model selector widget, rendered from the YAML data template via raw HTML.)
.. _vllm-benchmark-vllm-812: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% endfor %} {% endfor %} .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. .. _vllm-benchmark-performance-measurements-812: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and serving measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.0_20250812-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} Pull the Docker image ===================== Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad-812: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. 
The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. Although the :ref:`available models ` are preconfigured to collect offline throughput and online serving performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, include the ``--tunableop on`` argument in your run. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking .. rubric:: Download the Docker image and required scripts 1. Run the vLLM benchmark tool independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} docker run -it \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --shm-size 16G \ --security-opt seccomp=unconfined \ --security-opt apparmor=unconfined \ --cap-add=SYS_PTRACE \ -v $(pwd):/workspace \ --env HUGGINGFACE_HUB_CACHE=/workspace \ --name test \ {{ unified_docker.pull_tag }} 2. In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm 3. To start the benchmark, use the following command with the appropriate options. .. code-block:: ./run.sh \ --config $CONFIG_CSV \ --model_repo {{ model.model_repo }} \ .. dropdown:: Benchmark options :open: .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``--config`` - ``configs/default.csv`` - Run configs from the CSV for the chosen model repo and benchmark. * - - ``configs/extended.csv`` - * - - ``configs/performance.csv`` - * - ``--benchmark`` - ``throughput`` - Measure offline end-to-end throughput. * - - ``serving`` - Measure online serving performance. * - - ``all`` - Measure both throughput and serving. * - `` - See `run.sh `__ for more info. - Additional overrides to the config CSV. The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: For best performance, it's recommended to run with ``VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1``. If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. rubric:: Benchmarking examples Here are some examples of running the benchmark with various options: * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. code-block:: shell export MAD_MODEL_NAME={{ model.mad_tag }} ./run.sh \ --config configs/default.csv \ --model_repo {{model.model_repo}} \ --benchmark throughput Find the throughput benchmark report at ``./{{ model.mad_tag }}_throughput.csv``. 
* Serving benchmark Use this command to benchmark the serving performance of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. code-block:: export MAD_MODEL_NAME={{ model.mad_tag }} ./run.sh \ --config configs/default.csv \ --model_repo {{model.model_repo}} \ --benchmark serving Find the serving benchmark report at ``./{{ model.mad_tag }}_serving.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Advanced usage ============== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. Reproducing the Docker image ---------------------------- To reproduce this ROCm/vLLM Docker image release, follow these steps: 1. Clone the `vLLM repository `__. .. code-block:: shell git clone https://github.com/ROCm/vllm.git 2. Checkout the specific release commit. .. code-block:: shell cd vllm git checkout 340ea86dfe5955d6f9a9e767d6abab5aacf2c978 3. Build the Docker image. Replace ``vllm-rocm`` with your desired image tag. .. code-block:: shell docker build -f docker/Dockerfile.rocm -t vllm-rocm . Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X accelerators using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker-909: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20250909-benchmark-models.yaml {% set docker = data.dockers[0] %} The `ROCm vLLM Docker <{{ docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series accelerators. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series accelerators and includes the following components: .. 
list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series accelerators. What's new ========== The following is summary of notable changes since the :doc:`previous ROCm/vLLM Docker release `. * Upgraded to vLLM v0.10.1. * Set ``VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1`` by default for better performance. * Set ``VLLM_ROCM_USE_AITER_RMSNORM=0`` by default to avoid various issues with torch compile. .. _vllm-benchmark-supported-models-909: Supported models ================ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20250909-benchmark-models.yaml {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} .. _vllm-benchmark-available-models-909: The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. raw:: html
<!-- Model group and variant selector generated from the YAML model list -->
.. _vllm-benchmark-vllm-909: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% if model.precision == "float8" and model.model_repo.startswith("amd") %} This model uses FP8 quantization via `AMD Quark `__ for efficient inference on AMD accelerators. {% endif %} {% endfor %} {% endfor %} .. _vllm-benchmark-performance-measurements-909: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and serving measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X accelerators or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20250909-benchmark-models.yaml {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} Pull the Docker image ===================== Download the `ROCm vLLM Docker image <{{ docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad-909: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking The following run command is tailored to {{ model.model }}. See :ref:`vllm-benchmark-supported-models-909` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. 
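Benchmark runs can take several hours (the command above allows up to 28800 seconds), so you may want to detach the run from your login session. The following is only a sketch using standard shell tools; the log file name is an arbitrary example.

.. code-block:: shell

   export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models"
   # Run madengine in the background so the benchmark survives a dropped SSH session
   nohup madengine run \
       --tags {{model.mad_tag}} \
       --keep-model-dir \
       --live-output \
       --timeout 28800 > madengine_{{model.mad_tag}}.log 2>&1 &

   # Check progress later
   tail -f madengine_{{model.mad_tag}}.log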
The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. Although the :ref:`available models ` are preconfigured to collect offline throughput and online serving performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, include the ``--tunableop on`` argument in your run. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking The following commands are optimized for {{ model.model }}. See :ref:`vllm-benchmark-supported-models-909` to switch to another available model. .. seealso:: For more information on configuration, see the `config files `__ in the MAD repository. Refer to the `vLLM engine `__ for descriptions of available configuration options and `Benchmarking vLLM `__ for additional benchmarking information. .. rubric:: Launch the container You can run the vLLM benchmark tool independently by starting the `Docker container <{{ docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: shell docker pull {{ docker.pull_tag }} docker run -it \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --shm-size 16G \ --security-opt seccomp=unconfined \ --security-opt apparmor=unconfined \ --cap-add=SYS_PTRACE \ -v $(pwd):/workspace \ --env HUGGINGFACE_HUB_CACHE=/workspace \ --name test \ {{ docker.pull_tag }} .. rubric:: Throughput command Use the following command to start the throughput benchmark. .. code-block:: shell model={{ model.model_repo }} tp={{ model.config.tp }} num_prompts=1024 in=128 out=128 dtype={{ model.config.dtype }} kv_cache_dtype={{ model.config.kv_cache_dtype }} max_num_seqs=1024 max_seq_len_to_capture={{ model.config.max_seq_len_to_capture }} max_num_batched_tokens={{ model.config.max_num_batched_tokens }} max_model_len={{ model.config.max_model_len }} vllm bench throughput --model $model \ -tp $tp \ --num-prompts $num_prompts \ --input-len $in \ --output-len $out \ --dtype $dtype \ --kv-cache-dtype $kv_cache_dtype \ --max-num-seqs $max_num_seqs \ --max-seq-len-to-capture $max_seq_len_to_capture \ --max-num-batched-tokens $max_num_batched_tokens \ --max-model-len $max_model_len \ --trust-remote-code \ --output-json ${model}_throughput.json \ --gpu-memory-utilization 0.9 .. rubric:: Serving command 1. Start the server using the following command: .. 
code-block:: shell model={{ model.model_repo }} tp={{ model.config.tp }} dtype={{ model.config.dtype }} kv_cache_dtype={{ model.config.kv_cache_dtype }} max_num_seqs=256 max_seq_len_to_capture={{ model.config.max_seq_len_to_capture }} max_num_batched_tokens={{ model.config.max_num_batched_tokens }} max_model_len={{ model.config.max_model_len }} vllm serve $model \ -tp $tp \ --dtype $dtype \ --kv-cache-dtype $kv_cache_dtype \ --max-num-seqs $max_num_seqs \ --max-seq-len-to-capture $max_seq_len_to_capture \ --max-num-batched-tokens $max_num_batched_tokens \ --max-model-len $max_model_len \ --no-enable-prefix-caching \ --swap-space 16 \ --disable-log-requests \ --trust-remote-code \ --gpu-memory-utilization 0.9 Wait until the model has loaded and the server is ready to accept requests. 2. On another terminal on the same machine, run the benchmark: .. code-block:: shell # Connect to the container docker exec -it test bash # Wait for the server to start until curl -s http://localhost:8000/v1/models; do sleep 30; done # Run the benchmark model={{ model.model_repo }} max_concurrency=1 num_prompts=10 in=128 out=128 vllm bench serve --model $model \ --percentile-metrics "ttft,tpot,itl,e2el" \ --dataset-name random \ --ignore-eos \ --max-concurrency $max_concurrency \ --num-prompts $num_prompts \ --random-input-len $in \ --random-output-len $out \ --trust-remote-code \ --save-result \ --result-filename ${model}_serving.json .. note:: For improved performance with certain Mixture of Experts models, such as Mixtral 8x22B, try adding ``export VLLM_ROCM_USE_AITER=1`` to your commands. If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Advanced usage ============== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. Reproducing the Docker image ---------------------------- To reproduce this ROCm/vLLM Docker image release, follow these steps: 1. Clone the `vLLM repository `__. .. code-block:: shell git clone https://github.com/ROCm/vllm.git 2. Checkout the specific release commit. .. code-block:: shell cd vllm git checkout 6663000a391911eba96d7864a26ac42b07f6ef29 3. Build the Docker image. Replace ``vllm-rocm`` with your desired image tag. .. code-block:: shell docker build -f docker/Dockerfile.rocm -t vllm-rocm . Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series accelerators, see `AMD Instinct MI300X system optimization `_. - See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for a brief introduction to vLLM and optimization strategies. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. 
- For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker-930: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml {% set docker = data.dockers[0] %} The `ROCm vLLM Docker <{{ docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI355X, MI350X, MI325X and MI300X GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for AMD data center GPUs and includes the following components: .. tab-set:: .. tab-item:: {{ docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for AMD Instinct GPUs. What's new ========== The following is summary of notable changes since the :doc:`previous ROCm/vLLM Docker release `. * Added support for AMD Instinct MI355X and MI350X GPUs. * Added support and benchmarking instructions for the following models. See :ref:`vllm-benchmark-supported-models-930`. * Llama 4 Scout and Maverick * DeepSeek R1 0528 FP8 * MXFP4 models (MI355X and MI350X only): Llama 3.3 70B MXFP4 and Llama 3.1 405B MXFP4 * GPT OSS 20B and 120B * Qwen 3 32B, 30B-A3B, and 235B-A22B * Removed the deprecated ``--max-seq-len-to-capture`` flag. * ``--gpu-memory-utilization`` is now configurable via the `configuration files `__ in the MAD repository. .. _vllm-benchmark-supported-models-930: Supported models ================ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} .. _vllm-benchmark-available-models-930: The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. MXFP4 models are only supported on MI355X and MI350X GPUs. .. raw:: html
<!-- Model group and variant selector generated from the YAML model list -->
.. _vllm-benchmark-vllm-930: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} {% if model.precision == "float4" %} .. important:: MXFP4 is supported only on MI355X and MI350X GPUs. {% endif %} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% if model.precision == "float8" and model.model_repo.startswith("amd") %} This model uses FP8 quantization via `AMD Quark `__ for efficient inference on AMD GPUs. {% endif %} {% if model.precision == "float4" and model.model_repo.startswith("amd") %} This model uses FP4 quantization via `AMD Quark `__ for efficient inference on AMD GPUs. {% endif %} {% endfor %} {% endfor %} .. _vllm-benchmark-performance-measurements-930: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and serving measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml {% set docker = data.dockers[0] %} Download the `ROCm vLLM Docker image <{{ docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} Benchmarking ============ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad-930: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking The following run command is tailored to {{ model.model }}. See :ref:`vllm-benchmark-supported-models-930` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. 
On the host machine, use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one node with the :literal:`{{model.precision}}` data type. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. Although the :ref:`available models ` are preconfigured to collect offline throughput and online serving performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, include the ``--tunableop on`` argument in your run. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking The following commands are optimized for {{ model.model }}. See :ref:`vllm-benchmark-supported-models-930` to switch to another available model. .. seealso:: For more information on configuration, see the `config files `__ in the MAD repository. Refer to the `vLLM engine `__ for descriptions of available configuration options and `Benchmarking vLLM `__ for additional benchmarking information. .. rubric:: Launch the container You can run the vLLM benchmark tool independently by starting the `Docker container <{{ docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: shell docker pull {{ docker.pull_tag }} docker run -it \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --shm-size 16G \ --security-opt seccomp=unconfined \ --security-opt apparmor=unconfined \ --cap-add=SYS_PTRACE \ -v $(pwd):/workspace \ --env HUGGINGFACE_HUB_CACHE=/workspace \ --name test \ {{ docker.pull_tag }} .. rubric:: Throughput command Use the following command to start the throughput benchmark. .. code-block:: shell model={{ model.model_repo }} tp={{ model.config.tp }} num_prompts={{ model.config.num_prompts | default(1024) }} in={{ model.config.in | default(128) }} out={{ model.config.in | default(128) }} dtype={{ model.config.dtype | default("auto") }} kv_cache_dtype={{ model.config.kv_cache_dtype }} max_num_seqs={{ model.config.max_num_seqs | default(1024) }} max_num_batched_tokens={{ model.config.max_num_batched_tokens }} max_model_len={{ model.config.max_model_len }} vllm bench throughput --model $model \ -tp $tp \ --num-prompts $num_prompts \ --input-len $in \ --output-len $out \ --dtype $dtype \ --kv-cache-dtype $kv_cache_dtype \ --max-num-seqs $max_num_seqs \ --max-num-batched-tokens $max_num_batched_tokens \ --max-model-len $max_model_len \ --trust-remote-code \ --output-json ${model}_throughput.json \ --gpu-memory-utilization {{ model.config.gpu_memory_utilization | default(0.9) }} .. rubric:: Serving command 1. Start the server using the following command: .. 
code-block:: shell model={{ model.model_repo }} tp={{ model.config.tp }} dtype={{ model.config.dtype }} kv_cache_dtype={{ model.config.kv_cache_dtype }} max_num_seqs=256 max_num_batched_tokens={{ model.config.max_num_batched_tokens }} max_model_len={{ model.config.max_model_len }} vllm serve $model \ -tp $tp \ --dtype $dtype \ --kv-cache-dtype $kv_cache_dtype \ --max-num-seqs $max_num_seqs \ --max-num-batched-tokens $max_num_batched_tokens \ --max-model-len $max_model_len \ --no-enable-prefix-caching \ --swap-space 16 \ --disable-log-requests \ --trust-remote-code \ --gpu-memory-utilization 0.9 Wait until the model has loaded and the server is ready to accept requests. 2. On another terminal on the same machine, run the benchmark: .. code-block:: shell # Connect to the container docker exec -it test bash # Wait for the server to start until curl -s http://localhost:8000/v1/models; do sleep 30; done # Run the benchmark model={{ model.model_repo }} max_concurrency=1 num_prompts=10 in=128 out=128 vllm bench serve --model $model \ --percentile-metrics "ttft,tpot,itl,e2el" \ --dataset-name random \ --ignore-eos \ --max-concurrency $max_concurrency \ --num-prompts $num_prompts \ --random-input-len $in \ --random-output-len $out \ --trust-remote-code \ --save-result \ --result-filename ${model}_serving.json .. note:: For improved performance with certain Mixture of Experts models, such as Mixtral 8x22B, try adding ``export VLLM_ROCM_USE_AITER=1`` to your commands. If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Advanced usage ============== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. Reproducing the Docker image ---------------------------- To reproduce this ROCm-enabled vLLM Docker image release, follow these steps: 1. Clone the `vLLM repository `__. .. code-block:: shell git clone https://github.com/vllm-project/vllm.git cd vllm 2. Use the following command to build the image directly from the specified commit. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml {% set docker = data.dockers[0] %} .. code-block:: shell docker build -f docker/Dockerfile.rocm \ --build-arg REMOTE_VLLM=1 \ --build-arg VLLM_REPO=https://github.com/ROCm/vllm \ --build-arg VLLM_BRANCH="{{ docker.dockerfile.commit }}" \ -t vllm-rocm . .. tip:: Replace ``vllm-rocm`` with your desired image tag. Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for a brief introduction to vLLM and optimization strategies. 
- For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker-1103: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml {% set docker = data.dockers[0] %} The `ROCm vLLM Docker <{{ docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI355X, MI350X, MI325X and MI300X GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for AMD data center GPUs and includes the following components: .. tab-set:: .. tab-item:: {{ docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for AMD Instinct GPUs. What's new ========== The following is summary of notable changes since the :doc:`previous ROCm/vLLM Docker release `. * Enabled :ref:`AITER ` by default. * Fixed ``rms_norm`` segfault issue with Qwen 3 235B. * Known performance degradation on Llama 4 models due to `an upstream vLLM issue `_. .. _vllm-benchmark-supported-models-1103: Supported models ================ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} .. _vllm-benchmark-available-models-1103: The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. MXFP4 models are only supported on MI355X and MI350X GPUs. .. raw:: html
<!-- Model group and variant selector generated from the YAML model list -->
.. _vllm-benchmark-vllm-1103: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} {% if model.precision == "float4" %} .. important:: MXFP4 is supported only on MI355X and MI350X GPUs. {% endif %} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% if model.precision == "float8" and model.model_repo.startswith("amd") %} This model uses FP8 quantization via `AMD Quark `__ for efficient inference on AMD GPUs. {% endif %} {% if model.precision == "float4" and model.model_repo.startswith("amd") %} This model uses FP4 quantization via `AMD Quark `__ for efficient inference on AMD GPUs. {% endif %} {% endfor %} {% endfor %} .. _vllm-benchmark-performance-measurements-1103: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and serving measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml {% set docker = data.dockers[0] %} Download the `ROCm vLLM Docker image <{{ docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} Benchmarking ============ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad-1103: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking The following run command is tailored to {{ model.model }}. See :ref:`vllm-benchmark-supported-models-1103` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. 
On the host machine, use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one node with the :literal:`{{model.precision}}` data type. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. Although the :ref:`available models ` are preconfigured to collect offline throughput and online serving performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, include the ``--tunableop on`` argument in your run. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking The following commands are optimized for {{ model.model }}. See :ref:`vllm-benchmark-supported-models-1103` to switch to another available model. .. seealso:: For more information on configuration, see the `config files `__ in the MAD repository. Refer to the `vLLM engine `__ for descriptions of available configuration options and `Benchmarking vLLM `__ for additional benchmarking information. .. rubric:: Launch the container You can run the vLLM benchmark tool independently by starting the `Docker container <{{ docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: shell docker pull {{ docker.pull_tag }} docker run -it \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --shm-size 16G \ --security-opt seccomp=unconfined \ --security-opt apparmor=unconfined \ --cap-add=SYS_PTRACE \ -v $(pwd):/workspace \ --env HUGGINGFACE_HUB_CACHE=/workspace \ --name test \ {{ docker.pull_tag }} .. rubric:: Throughput command Use the following command to start the throughput benchmark. .. code-block:: shell model={{ model.model_repo }} tp={{ model.config.tp }} num_prompts={{ model.config.num_prompts | default(1024) }} in={{ model.config.in | default(128) }} out={{ model.config.in | default(128) }} dtype={{ model.config.dtype | default("auto") }} kv_cache_dtype={{ model.config.kv_cache_dtype }} max_num_seqs={{ model.config.max_num_seqs | default(1024) }} max_num_batched_tokens={{ model.config.max_num_batched_tokens }} max_model_len={{ model.config.max_model_len }} vllm bench throughput --model $model \ -tp $tp \ --num-prompts $num_prompts \ --input-len $in \ --output-len $out \ --dtype $dtype \ --kv-cache-dtype $kv_cache_dtype \ --max-num-seqs $max_num_seqs \ --max-num-batched-tokens $max_num_batched_tokens \ --max-model-len $max_model_len \ --trust-remote-code \ --output-json ${model}_throughput.json \ --gpu-memory-utilization {{ model.config.gpu_memory_utilization | default(0.9) }} .. rubric:: Serving command 1. Start the server using the following command: .. 
code-block:: shell model={{ model.model_repo }} tp={{ model.config.tp }} dtype={{ model.config.dtype }} kv_cache_dtype={{ model.config.kv_cache_dtype }} max_num_seqs=256 max_num_batched_tokens={{ model.config.max_num_batched_tokens }} max_model_len={{ model.config.max_model_len }} vllm serve $model \ -tp $tp \ --dtype $dtype \ --kv-cache-dtype $kv_cache_dtype \ --max-num-seqs $max_num_seqs \ --max-num-batched-tokens $max_num_batched_tokens \ --max-model-len $max_model_len \ --no-enable-prefix-caching \ --swap-space 16 \ --disable-log-requests \ --trust-remote-code \ --gpu-memory-utilization 0.9 Wait until the model has loaded and the server is ready to accept requests. 2. On another terminal on the same machine, run the benchmark: .. code-block:: shell # Connect to the container docker exec -it test bash # Wait for the server to start until curl -s http://localhost:8000/v1/models; do sleep 30; done # Run the benchmark model={{ model.model_repo }} max_concurrency=1 num_prompts=10 in=128 out=128 vllm bench serve --model $model \ --percentile-metrics "ttft,tpot,itl,e2el" \ --dataset-name random \ --ignore-eos \ --max-concurrency $max_concurrency \ --num-prompts $num_prompts \ --random-input-len $in \ --random-output-len $out \ --trust-remote-code \ --save-result \ --result-filename ${model}_serving.json .. note:: For improved performance with certain Mixture of Experts models, such as Mixtral 8x22B, try adding ``export VLLM_ROCM_USE_AITER=1`` to your commands. If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Advanced usage ============== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. .. note:: If you’re using this Docker image on other AMD GPUs such as the AMD Instinct MI200 Series or Radeon, add ``export VLLM_ROCM_USE_AITER=0`` to your command, since AITER is only supported on gfx942 and gfx950 architectures. Reproducing the Docker image ---------------------------- To reproduce this ROCm-enabled vLLM Docker image release, follow these steps: 1. Clone the `vLLM repository `__. .. code-block:: shell git clone https://github.com/vllm-project/vllm.git cd vllm 2. Use the following command to build the image directly from the specified commit. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml {% set docker = data.dockers[0] %} .. code-block:: shell docker build -f docker/Dockerfile.rocm \ --build-arg REMOTE_VLLM=1 \ --build-arg VLLM_REPO=https://github.com/ROCm/vllm \ --build-arg VLLM_BRANCH="{{ docker.dockerfile.commit }}" \ -t vllm-rocm . .. tip:: Replace ``vllm-rocm`` with your desired image tag. Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. 
- To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for a brief introduction to vLLM and optimization strategies. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the unified ROCm Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker: The `ROCm vLLM Docker `_ image offers a prebuilt, optimized environment designed for validating large language model (LLM) inference performance on the AMD Instinct™ MI300X GPU. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for the MI300X GPU and includes the following components: * `ROCm 6.2.0 `_ * `vLLM 0.4.3 `_ * `PyTorch 2.4.0 `_ * Tuning files (in CSV format) With this Docker image, you can quickly validate the expected inference performance numbers on the MI300X GPU. This topic also provides tips on optimizing performance with popular AI models. .. _vllm-benchmark-vllm: .. note:: vLLM is a toolkit and library for LLM inference and serving. It deploys the PagedAttention algorithm, which reduces memory consumption and increases throughput by leveraging dynamic key and value allocation in GPU memory. vLLM also incorporates many LLM acceleration and quantization algorithms. In addition, AMD implements high-performance custom kernels and modules in vLLM to enhance performance further. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. Getting started =============== Use the following procedures to reproduce the benchmark results on an MI300X GPU with the prebuilt vLLM Docker image. .. _vllm-benchmark-get-started: 1. Disable NUMA auto-balancing. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 2. Download the :ref:`ROCm vLLM Docker image `. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/vllm:rocm6.2_mi300_ubuntu22.04_py3.9_vllm_7c5fd50 Once setup is complete, you can choose between two options to reproduce the benchmark results: - :ref:`MAD-integrated benchmarking ` - :ref:`Standalone benchmarking ` .. 
_vllm-benchmark-mad-v043: MAD-integrated benchmarking =========================== Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run a performance benchmark test of the Llama 3.1 8B model on one GPU with ``float16`` data type in the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags pyt_vllm_llama-3.1-8b --keep-model-dir --live-output --timeout 28800 ROCm MAD launches a Docker container with the name ``container_ci-pyt_vllm_llama-3.1-8b``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_float16/`` Although the following eight models are pre-configured to collect latency and throughput performance data, users can also change the benchmarking parameters. Refer to the :ref:`Standalone benchmarking ` section. Available models ---------------- .. hlist:: :columns: 3 * ``pyt_vllm_llama-3.1-8b`` * ``pyt_vllm_llama-3.1-70b`` * ``pyt_vllm_llama-3.1-405b`` * ``pyt_vllm_llama-2-7b`` * ``pyt_vllm_mistral-7b`` * ``pyt_vllm_qwen2-7b`` * ``pyt_vllm_jais-13b`` * ``pyt_vllm_jais-30b`` .. _vllm-benchmark-standalone-v043: Standalone benchmarking ======================= You can run the vLLM benchmark tool independently by starting the :ref:`Docker container ` as shown in the following snippet. .. code-block:: docker pull rocm/vllm:rocm6.2_mi300_ubuntu22.04_py3.9_vllm_7c5fd50 docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name unified_docker_vllm rocm/vllm:rocm6.2_mi300_ubuntu22.04_py3.9_vllm_7c5fd50 In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm Multiprocessing distributed executor -------------------------------------- To optimize vLLM performance, add the multiprocessing API server argument ``--distributed-executor-backend mp``. Command ^^^^^^^^^^^^^^^^^^^^^^^^^ To start the benchmark, use the following command with the appropriate options. See :ref:`Options ` for the list of options and their descriptions. .. code-block:: shell ./vllm_benchmark_report.sh -s $test_option -m $model_repo -g $num_gpu -d $datatype See the :ref:`examples ` for more information. .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: shell OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. _vllm-benchmark-standalone-options-v043: Options ^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
list-table:: :header-rows: 1 * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$model_repo`` - ``meta-llama/Meta-Llama-3.1-8B-Instruct`` - Llama 3.1 8B * - (``float16``) - ``meta-llama/Meta-Llama-3.1-70B-Instruct`` - Llama 3.1 70B * - - ``meta-llama/Meta-Llama-3.1-405B-Instruct`` - Llama 3.1 405B * - - ``meta-llama/Llama-2-7b-chat-hf`` - Llama 2 7B * - - ``mistralai/Mixtral-8x7B-Instruct-v0.1`` - Mixtral 8x7B * - - ``mistralai/Mixtral-8x22B-Instruct-v0.1`` - Mixtral 8x22B * - - ``mistralai/Mistral-7B-Instruct-v0.3`` - Mixtral 7B * - - ``Qwen/Qwen2-7B-Instruct`` - Qwen2 7B * - - ``core42/jais-13b-chat`` - JAIS 13B * - - ``core42/jais-30b-chat-v3`` - JAIS 30B * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` - Data type .. _vllm-benchmark-run-benchmark-v043: Running the benchmark on the MI300X GPU ----------------------------------------------- Here are some examples of running the benchmark with various options. See :ref:`Options ` for the list of options and their descriptions. Latency benchmark example ^^^^^^^^^^^^^^^^^^^^^^^^^ Use this command to benchmark the latency of the Llama 3.1 8B model on one GPU with the ``float16`` data type. .. code-block:: ./vllm_benchmark_report.sh -s latency -m meta-llama/Meta-Llama-3.1-8B-Instruct -g 1 -d float16 Find the latency report at: - ``./reports_float16/summary/Meta-Llama-3.1-8B-Instruct_latency_report.csv`` Throughput benchmark example ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Use this command to benchmark the throughput of the Llama 3.1 8B model on one GPU with the ``float16`` and ``float8`` data types. .. code-block:: shell ./vllm_benchmark_report.sh -s throughput -m meta-llama/Meta-Llama-3.1-8B-Instruct -g 1 -d float16 Find the throughput reports at: - ``./reports_float16/summary/Meta-Llama-3.1-8B-Instruct_throughput_report.csv`` .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time Further reading =============== - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_ - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the unified ROCm Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. 
caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker: The `ROCm vLLM Docker `_ image offers a prebuilt, optimized environment designed for validating large language model (LLM) inference performance on the AMD Instinct™ MI300X GPU. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for the MI300X GPU and includes the following components: * `ROCm 6.2.1 `_ * `vLLM 0.6.4 `_ * `PyTorch 2.5.0 `_ * Tuning files (in CSV format) With this Docker image, you can quickly validate the expected inference performance numbers on the MI300X GPU. This topic also provides tips on optimizing performance with popular AI models. .. hlist:: :columns: 6 * Llama 3.1 8B * Llama 3.1 70B * Llama 3.1 405B * Llama 2 7B * Llama 2 70B * Mixtral 8x7B * Mixtral 8x22B * Mixtral 7B * Qwen2 7B * Qwen2 72B * JAIS 13B * JAIS 30B .. _vllm-benchmark-vllm: .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. Getting started =============== Use the following procedures to reproduce the benchmark results on an MI300X GPU with the prebuilt vLLM Docker image. .. _vllm-benchmark-get-started: 1. Disable NUMA auto-balancing. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 2. Download the :ref:`ROCm vLLM Docker image `. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/vllm:rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4 Once setup is complete, you can choose between two options to reproduce the benchmark results: - :ref:`MAD-integrated benchmarking ` - :ref:`Standalone benchmarking ` .. _vllm-benchmark-mad-v064: MAD-integrated benchmarking =========================== Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run a performance benchmark test of the Llama 3.1 8B model on one GPU with ``float16`` data type in the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags pyt_vllm_llama-3.1-8b --keep-model-dir --live-output --timeout 28800 ROCm MAD launches a Docker container with the name ``container_ci-pyt_vllm_llama-3.1-8b``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_float16/``. Although the following models are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. Refer to the :ref:`Standalone benchmarking ` section. Available models ---------------- .. 
hlist:: :columns: 3 * ``pyt_vllm_llama-3.1-8b`` * ``pyt_vllm_llama-3.1-70b`` * ``pyt_vllm_llama-3.1-405b`` * ``pyt_vllm_llama-2-7b`` * ``pyt_vllm_llama-2-70b`` * ``pyt_vllm_mixtral-8x7b`` * ``pyt_vllm_mixtral-8x22b`` * ``pyt_vllm_mistral-7b`` * ``pyt_vllm_qwen2-7b`` * ``pyt_vllm_qwen2-72b`` * ``pyt_vllm_jais-13b`` * ``pyt_vllm_jais-30b`` * ``pyt_vllm_llama-3.1-8b_fp8`` * ``pyt_vllm_llama-3.1-70b_fp8`` * ``pyt_vllm_llama-3.1-405b_fp8`` * ``pyt_vllm_mixtral-8x7b_fp8`` * ``pyt_vllm_mixtral-8x22b_fp8`` .. _vllm-benchmark-standalone-v064: Standalone benchmarking ======================= You can run the vLLM benchmark tool independently by starting the :ref:`Docker container ` as shown in the following snippet. .. code-block:: docker pull rocm/vllm:rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4 docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name vllm_v0.6.4 rocm/vllm:rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4 In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm Command ------- To start the benchmark, use the following command with the appropriate options. See :ref:`Options ` for the list of options and their descriptions. .. code-block:: shell ./vllm_benchmark_report.sh -s $test_option -m $model_repo -g $num_gpu -d $datatype See the :ref:`examples ` for more information. .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: shell OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. _vllm-benchmark-standalone-v064-options: Options ------- .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$model_repo`` - ``meta-llama/Meta-Llama-3.1-8B-Instruct`` - Llama 3.1 8B * - (``float16``) - ``meta-llama/Meta-Llama-3.1-70B-Instruct`` - Llama 3.1 70B * - - ``meta-llama/Meta-Llama-3.1-405B-Instruct`` - Llama 3.1 405B * - - ``meta-llama/Llama-2-7b-chat-hf`` - Llama 2 7B * - - ``meta-llama/Llama-2-70b-chat-hf`` - Llama 2 70B * - - ``mistralai/Mixtral-8x7B-Instruct-v0.1`` - Mixtral 8x7B * - - ``mistralai/Mixtral-8x22B-Instruct-v0.1`` - Mixtral 8x22B * - - ``mistralai/Mistral-7B-Instruct-v0.3`` - Mistral 7B * - - ``Qwen/Qwen2-7B-Instruct`` - Qwen2 7B * - - ``Qwen/Qwen2-72B-Instruct`` - Qwen2 72B * - - ``core42/jais-13b-chat`` - JAIS 13B * - - ``core42/jais-30b-chat-v3`` - JAIS 30B * - ``$model_repo`` - ``amd/Meta-Llama-3.1-8B-Instruct-FP8-KV`` - Llama 3.1 8B * - (``float8``) - ``amd/Meta-Llama-3.1-70B-Instruct-FP8-KV`` - Llama 3.1 70B * - - ``amd/Meta-Llama-3.1-405B-Instruct-FP8-KV`` - Llama 3.1 405B * - - ``amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV`` - Mixtral 8x7B * - - ``amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV`` - Mixtral 8x22B * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type ..
_vllm-benchmark-run-benchmark-v064: Running the benchmark on the MI300X GPU ----------------------------------------------- Here are some examples of running the benchmark with various options. See :ref:`Options ` for the list of options and their descriptions. Example 1: latency benchmark ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Use this command to benchmark the latency of the Llama 3.1 8B model on one GPU with the ``float16`` and ``float8`` data types. .. code-block:: ./vllm_benchmark_report.sh -s latency -m meta-llama/Meta-Llama-3.1-8B-Instruct -g 1 -d float16 ./vllm_benchmark_report.sh -s latency -m amd/Meta-Llama-3.1-8B-Instruct-FP8-KV -g 1 -d float8 Find the latency reports at: - ``./reports_float16/summary/Meta-Llama-3.1-8B-Instruct_latency_report.csv`` - ``./reports_float8/summary/Meta-Llama-3.1-8B-Instruct-FP8-KV_latency_report.csv`` Example 2: throughput benchmark ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Use this command to benchmark the throughput of the Llama 3.1 8B model on one GPU with the ``float16`` and ``float8`` data types. .. code-block:: shell ./vllm_benchmark_report.sh -s throughput -m meta-llama/Meta-Llama-3.1-8B-Instruct -g 1 -d float16 ./vllm_benchmark_report.sh -s throughput -m amd/Meta-Llama-3.1-8B-Instruct-FP8-KV -g 1 -d float8 Find the throughput reports at: - ``./reports_float16/summary/Meta-Llama-3.1-8B-Instruct_throughput_report.csv`` - ``./reports_float8/summary/Meta-Llama-3.1-8B-Instruct-FP8-KV_throughput_report.csv`` .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time Further reading =============== - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_ - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate *********************************************************** LLM inference performance validation on AMD Instinct MI300X *********************************************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker: The `ROCm vLLM Docker `_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on the AMD Instinct™ MI300X GPU. 
This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for the MI300X GPU and includes the following components: * `ROCm 6.3.1 `_ * `vLLM 0.6.6 `_ * `PyTorch 2.7.0 (2.7.0a0+git3a58512) `_ With this Docker image, you can quickly validate the expected inference performance numbers for the MI300X GPU. This topic also provides tips on optimizing performance with popular AI models. For more information, see the lists of :ref:`available models for MAD-integrated benchmarking ` and :ref:`standalone benchmarking `. .. _vllm-benchmark-vllm: .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. Getting started =============== Use the following procedures to reproduce the benchmark results on an MI300X GPU with the prebuilt vLLM Docker image. .. _vllm-benchmark-get-started: 1. Disable NUMA auto-balancing. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 2. Download the :ref:`ROCm vLLM Docker image `. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6 Once the setup is complete, choose between two options to reproduce the benchmark results: - :ref:`MAD-integrated benchmarking ` - :ref:`Standalone benchmarking ` .. _vllm-benchmark-mad-v066: MAD-integrated benchmarking =========================== Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run a performance benchmark test of the Llama 3.1 8B model on one GPU with ``float16`` data type in the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags pyt_vllm_llama-3.1-8b --keep-model-dir --live-output --timeout 28800 ROCm MAD launches a Docker container with the name ``container_ci-pyt_vllm_llama-3.1-8b``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_float16/``. Although the following models are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. Refer to the :ref:`Standalone benchmarking ` section. .. _vllm-benchmark-mad-v066-models: Available models ---------------- .. 
list-table:: :header-rows: 1 :widths: 2, 3 * - Model name - Tag * - `Llama 3.1 8B `_ - ``pyt_vllm_llama-3.1-8b`` * - `Llama 3.1 70B `_ - ``pyt_vllm_llama-3.1-70b`` * - `Llama 3.1 405B `_ - ``pyt_vllm_llama-3.1-405b`` * - `Llama 3.2 11B Vision `_ - ``pyt_vllm_llama-3.2-11b-vision-instruct`` * - `Llama 2 7B `__ - ``pyt_vllm_llama-2-7b`` * - `Llama 2 70B `__ - ``pyt_vllm_llama-2-70b`` * - `Mixtral MoE 8x7B `_ - ``pyt_vllm_mixtral-8x7b`` * - `Mixtral MoE 8x22B `_ - ``pyt_vllm_mixtral-8x22b`` * - `Mistral 7B `_ - ``pyt_vllm_mistral-7b`` * - `Qwen2 7B `_ - ``pyt_vllm_qwen2-7b`` * - `Qwen2 72B `_ - ``pyt_vllm_qwen2-72b`` * - `JAIS 13B `_ - ``pyt_vllm_jais-13b`` * - `JAIS 30B `_ - ``pyt_vllm_jais-30b`` * - `DBRX Instruct `_ - ``pyt_vllm_dbrx-instruct`` * - `Gemma 2 27B `_ - ``pyt_vllm_gemma-2-27b`` * - `C4AI Command R+ 08-2024 `_ - ``pyt_vllm_c4ai-command-r-plus-08-2024`` * - `DeepSeek MoE 16B `_ - ``pyt_vllm_deepseek-moe-16b-chat`` * - `Llama 3.1 70B FP8 `_ - ``pyt_vllm_llama-3.1-70b_fp8`` * - `Llama 3.1 405B FP8 `_ - ``pyt_vllm_llama-3.1-405b_fp8`` * - `Mixtral MoE 8x7B FP8 `_ - ``pyt_vllm_mixtral-8x7b_fp8`` * - `Mixtral MoE 8x22B FP8 `_ - ``pyt_vllm_mixtral-8x22b_fp8`` * - `Mistral 7B FP8 `_ - ``pyt_vllm_mistral-7b_fp8`` * - `DBRX Instruct FP8 `_ - ``pyt_vllm_dbrx_fp8`` * - `C4AI Command R+ 08-2024 FP8 `_ - ``pyt_vllm_command-r-plus_fp8`` .. _vllm-benchmark-standalone-v066: Standalone benchmarking ======================= You can run the vLLM benchmark tool independently by starting the :ref:`Docker container ` as shown in the following snippet. .. code-block:: docker pull rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6 docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 16G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name vllm_v0.6.6 rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6 In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm Command ------- To start the benchmark, use the following command with the appropriate options. See :ref:`Options ` for the list of options and their descriptions. .. code-block:: shell ./vllm_benchmark_report.sh -s $test_option -m $model_repo -g $num_gpu -d $datatype See the :ref:`examples ` for more information. .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: shell OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. _vllm-benchmark-standalone-v066-options: Options and available models ---------------------------- .. 
list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$model_repo`` - ``meta-llama/Llama-3.1-8B-Instruct`` - `Llama 3.1 8B `_ * - (``float16``) - ``meta-llama/Llama-3.1-70B-Instruct`` - `Llama 3.1 70B `_ * - - ``meta-llama/Llama-3.1-405B-Instruct`` - `Llama 3.1 405B `_ * - - ``meta-llama/Llama-3.2-11B-Vision-Instruct`` - `Llama 3.2 11B Vision `_ * - - ``meta-llama/Llama-2-7b-chat-hf`` - `Llama 2 7B `__ * - - ``meta-llama/Llama-2-70b-chat-hf`` - `Llama 2 70B `__ * - - ``mistralai/Mixtral-8x7B-Instruct-v0.1`` - `Mixtral MoE 8x7B `_ * - - ``mistralai/Mixtral-8x22B-Instruct-v0.1`` - `Mixtral MoE 8x22B `_ * - - ``mistralai/Mistral-7B-Instruct-v0.3`` - `Mistral 7B `_ * - - ``Qwen/Qwen2-7B-Instruct`` - `Qwen2 7B `_ * - - ``Qwen/Qwen2-72B-Instruct`` - `Qwen2 72B `_ * - - ``core42/jais-13b-chat`` - `JAIS 13B `_ * - - ``core42/jais-30b-chat-v3`` - `JAIS 30B `_ * - - ``databricks/dbrx-instruct`` - `DBRX Instruct `_ * - - ``google/gemma-2-27b`` - `Gemma 2 27B `_ * - - ``CohereForAI/c4ai-command-r-plus-08-2024`` - `C4AI Command R+ 08-2024 `_ * - - ``deepseek-ai/deepseek-moe-16b-chat`` - `DeepSeek MoE 16B `_ * - ``$model_repo`` - ``amd/Llama-3.1-70B-Instruct-FP8-KV`` - `Llama 3.1 70B FP8 `_ * - (``float8``) - ``amd/Llama-3.1-405B-Instruct-FP8-KV`` - `Llama 3.1 405B FP8 `_ * - - ``amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV`` - `Mixtral MoE 8x7B FP8 `_ * - - ``amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV`` - `Mixtral MoE 8x22B FP8 `_ * - - ``amd/Mistral-7B-v0.1-FP8-KV`` - `Mistral 7B FP8 `_ * - - ``amd/dbrx-instruct-FP8-KV`` - `DBRX Instruct FP8 `_ * - - ``amd/c4ai-command-r-plus-FP8-KV`` - `C4AI Command R+ 08-2024 FP8 `_ * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type .. _vllm-benchmark-run-benchmark-v066: Running the benchmark on the MI300X GPU ----------------------------------------------- Here are some examples of running the benchmark with various options. See :ref:`Options ` for the list of options and their descriptions. Example 1: latency benchmark ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Use this command to benchmark the latency of the Llama 3.1 70B model on eight GPUs with the ``float16`` and ``float8`` data types. .. code-block:: ./vllm_benchmark_report.sh -s latency -m meta-llama/Llama-3.1-70B-Instruct -g 8 -d float16 ./vllm_benchmark_report.sh -s latency -m amd/Llama-3.1-70B-Instruct-FP8-KV -g 8 -d float8 Find the latency reports at: - ``./reports_float16/summary/Llama-3.1-70B-Instruct_latency_report.csv`` - ``./reports_float8/summary/Llama-3.1-70B-Instruct-FP8-KV_latency_report.csv`` Example 2: throughput benchmark ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Use this command to benchmark the throughput of the Llama 3.1 70B model on eight GPUs with the ``float16`` and ``float8`` data types. .. code-block:: shell ./vllm_benchmark_report.sh -s throughput -m meta-llama/Llama-3.1-70B-Instruct -g 8 -d float16 ./vllm_benchmark_report.sh -s throughput -m amd/Llama-3.1-70B-Instruct-FP8-KV -g 8 -d float8 Find the throughput reports at: - ``./reports_float16/summary/Llama-3.1-70B-Instruct_throughput_report.csv`` - ``./reports_float8/summary/Llama-3.1-70B-Instruct-FP8-KV_throughput_report.csv`` .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. 
math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time Further reading =============== - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_ - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.7.3_20250325-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series GPU. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series GPUs and includes the following components: * `ROCm {{ unified_docker.rocm_version }} `_ * `vLLM {{ unified_docker.vllm_version }} `_ * `PyTorch {{ unified_docker.pytorch_version }} `_ * `hipBLASLt {{ unified_docker.hipblaslt_version }} `_ With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series GPUs. .. _vllm-benchmark-available-models-v073: Available models ================ .. raw:: html
(model selector: the raw HTML markup for this table was lost in extraction; it renders a "Model" heading with an entry for each {{ model_group.group }} and a "Model variant" heading with an entry for each {{ model.model }} in that group, generated from ``model_groups``)
.. _vllm-benchmark-vllm: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% endfor %} {% endfor %} .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. .. _vllm-benchmark-performance-measurements-v073: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the :doc:`latest version of this inference benchmarking environment <../vllm>`. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. Advanced features and known issues ================================== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. Getting started =============== Use the following procedures to reproduce the benchmark results on an MI300X GPU with the prebuilt vLLM Docker image. .. _vllm-benchmark-get-started: 1. Disable NUMA auto-balancing. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 2. Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad-v073: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags {{model.mad_tag}} --keep-model-dir --live-output --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_{{model.precision}}/``. 
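If you want to spot-check the collected results from the host, standard shell utilities are enough. The following is a minimal, optional sketch; ``find`` and ``column`` are generic tools rather than part of the MAD workflow, and the exact report file names depend on the model you benchmarked.

.. code-block:: shell

   # locate the summary CSVs produced by the run (file names vary by model)
   find ~/MAD/reports_{{model.precision}} -name "*_report.csv"

   # render one of the reports as an aligned table
   # (substitute a path returned by the previous command)
   column -s, -t < path/to/summary_report.csv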
Although the :ref:`available models ` are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. .. tab-item:: Standalone benchmarking Run the vLLM benchmark tool independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: docker pull {{ unified_docker.pull_tag }} docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 16G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name test {{ unified_docker.pull_tag }} In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm To start the benchmark, use the following command with the appropriate options. .. code-block:: ./vllm_benchmark_report.sh -s $test_option -m {{model.model_repo}} -g $num_gpu -d {{model.precision}} .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token Here are some examples of running the benchmark with various options. * Latency benchmark Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with the :literal:`{{model.precision}}` data type. .. code-block:: ./vllm_benchmark_report.sh -s latency -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the latency report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``. * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with the :literal:`{{model.precision}}` data type. .. code-block:: shell ./vllm_benchmark_report.sh -s throughput -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the throughput report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Further reading =============== - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn more about the options for latency and throughput benchmark scripts, see ``_. 
- To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_ - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. _vllm-benchmark-unified-docker: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.8.3_20250415-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series GPUs and includes the following components: * `ROCm {{ unified_docker.rocm_version }} `_ * `vLLM {{ unified_docker.vllm_version }} `_ * `PyTorch {{ unified_docker.pytorch_version }} `_ * `hipBLASLt {{ unified_docker.hipblaslt_version }} `_ With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series GPUs. .. _vllm-benchmark-available-models-v083: Supported models ================ .. raw:: html
(model selector: the raw HTML markup for this table was lost in extraction; it renders a "Model" heading with an entry for each {{ model_group.group }} and a "Model variant" heading with an entry for each {{ model.model }} in that group, generated from ``model_groups``)
.. _vllm-benchmark-vllm: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% endfor %} {% endfor %} .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. .. _vllm-benchmark-performance-measurements-v083: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the :doc:`latest version of this inference benchmarking environment <../vllm>`. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. Advanced features and known issues ================================== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags {{model.mad_tag}} --keep-model-dir --live-output --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. 
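While the benchmark is running, you can follow its progress from another terminal with standard Docker commands. This is an optional sketch; the commands below are plain Docker CLI calls and are not part of the MAD tooling.

.. code-block:: shell

   # confirm that the benchmark container launched by MAD is running
   docker ps --filter "name=container_ci-{{model.mad_tag}}"

   # stream the container logs to watch benchmark progress
   docker logs -f container_ci-{{model.mad_tag}}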
The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_{{model.precision}}/``. Although the :ref:`available models ` are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, edit the default run behavior in the ``models.json`` configuration before running inference -- update the model's run ``args`` by changing ``--tunableop off`` to ``--tunableop on``. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking Run the vLLM benchmark tool independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: docker pull {{ unified_docker.pull_tag }} docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 16G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name test {{ unified_docker.pull_tag }} In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm To start the benchmark, use the following command with the appropriate options. .. code-block:: ./vllm_benchmark_report.sh -s $test_option -m {{model.model_repo}} -g $num_gpu -d {{model.precision}} .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token Here are some examples of running the benchmark with various options. * Latency benchmark Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. code-block:: ./vllm_benchmark_report.sh -s latency -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the latency report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``. * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. 
code-block:: shell ./vllm_benchmark_report.sh -s throughput -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the throughput report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Further reading =============== - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_ - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.8.5_20250513-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series GPUs and includes the following components: * `ROCm {{ unified_docker.rocm_version }} `_ * `vLLM {{ unified_docker.vllm_version }} `_ * `PyTorch {{ unified_docker.pytorch_version }} `_ * `hipBLASLt {{ unified_docker.hipblaslt_version }} `_ With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series GPUs. .. _vllm-benchmark-available-models-v085-20250513: Supported models ================ The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. raw:: html
(model selector: the raw HTML markup for this table was lost in extraction; it renders a "Model group" heading with an entry for each {{ model_group.group }} and a "Model" heading with an entry for each {{ model.model }} in that group, generated from ``model_groups``)
.. _vllm-benchmark-vllm: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% endfor %} {% endfor %} .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. .. _vllm-benchmark-performance-measurements-v085-20250513: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the :doc:`latest version of this inference benchmarking environment <../vllm>`. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. Advanced features and known issues ================================== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags {{model.mad_tag}} --keep-model-dir --live-output --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. 
The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_{{model.precision}}/``. Although the :ref:`available models ` are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, edit the default run behavior in the ``models.json`` configuration before running inference -- update the model's run ``args`` by changing ``--tunableop off`` to ``--tunableop on``. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking Run the vLLM benchmark tool independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: docker pull {{ unified_docker.pull_tag }} docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 16G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name test {{ unified_docker.pull_tag }} In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm To start the benchmark, use the following command with the appropriate options. .. code-block:: ./vllm_benchmark_report.sh -s $test_option -m {{model.model_repo}} -g $num_gpu -d {{model.precision}} .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token Here are some examples of running the benchmark with various options. * Latency benchmark Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. code-block:: ./vllm_benchmark_report.sh -s latency -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the latency report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``. * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. 
code-block:: shell ./vllm_benchmark_report.sh -s throughput -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the throughput report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_ - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.8.5_20250521-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series GPUs and includes the following components: * `ROCm {{ unified_docker.rocm_version }} `_ * `vLLM {{ unified_docker.vllm_version }} `_ * `PyTorch {{ unified_docker.pytorch_version }} `_ * `hipBLASLt {{ unified_docker.hipblaslt_version }} `_ With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series GPUs. .. _vllm-benchmark-available-models-v085-20250521: Supported models ================ The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. raw:: html
(model selector: the raw HTML markup for this table was lost in extraction; it renders a "Model group" heading with an entry for each {{ model_group.group }} and a "Model" heading with an entry for each {{ model.model }} in that group, generated from ``model_groups``)
.. _vllm-benchmark-vllm: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% endfor %} {% endfor %} .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. .. _vllm-benchmark-performance-measurements-v085-20250521: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for inferencing popular AI models. .. note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. Advanced features and known issues ================================== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags {{model.mad_tag}} --keep-model-dir --live-output --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_{{model.precision}}/``. 
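The throughput figures in these reports are calculated with the formulas shown in the throughput note under the standalone benchmarking tab. As a quick sanity check, here is a worked example with purely illustrative numbers (not measured results): 32 requests, each with 2048 input tokens and 2048 output tokens, completed in 64 seconds.

.. math:: throughput\_tot = 32 \times (2048 + 2048) / 64 = 2048 \text{ tokens/s}

.. math:: throughput\_gen = 32 \times 2048 / 64 = 1024 \text{ tokens/s}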
Although the :ref:`available models ` are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, edit the default run behavior in the ``models.json`` configuration before running inference -- update the model's run ``args`` by changing ``--tunableop off`` to ``--tunableop on``. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking Run the vLLM benchmark tool independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: docker pull {{ unified_docker.pull_tag }} docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 16G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name test {{ unified_docker.pull_tag }} In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm To start the benchmark, use the following command with the appropriate options. .. code-block:: ./vllm_benchmark_report.sh -s $test_option -m {{model.model_repo}} -g $num_gpu -d {{model.precision}} .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token Here are some examples of running the benchmark with various options. * Latency benchmark Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. code-block:: ./vllm_benchmark_report.sh -s latency -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the latency report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``. * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. 
code-block:: shell ./vllm_benchmark_report.sh -s throughput -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the throughput report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_ - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.9.0.1_20250605-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series GPUs and includes the following components: * `ROCm {{ unified_docker.rocm_version }} `_ * `vLLM {{ unified_docker.vllm_version }} `_ * `PyTorch {{ unified_docker.pytorch_version }} `_ * `hipBLASLt {{ unified_docker.hipblaslt_version }} `_ With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series GPUs. .. _vllm-benchmark-available-models-v0901-20250605: Supported models ================ The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. raw:: html
Model group
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Model
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
.. _vllm-benchmark-vllm: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% endfor %} {% endfor %} .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. .. _vllm-benchmark-performance-measurements-v0901-20250605: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. Advanced features and known issues ================================== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags {{model.mad_tag}} --keep-model-dir --live-output --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. 
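If you want to watch the benchmark's progress from another terminal while MAD runs, you can follow the container's output with standard Docker commands. This is an optional, minimal sketch -- the container name follows the pattern shown above.

.. code-block:: shell

   # confirm the MAD-launched container is running
   docker ps --filter "name=container_ci-{{model.mad_tag}}"

   # stream its logs until the benchmark finishes
   docker logs -f container_ci-{{model.mad_tag}}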
The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_{{model.precision}}/``. Although the :ref:`available models ` are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, edit the default run behavior in the ``models.json`` configuration before running inference -- update the model's run ``args`` by changing ``--tunableop off`` to ``--tunableop on``. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking Run the vLLM benchmark tool independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: docker pull {{ unified_docker.pull_tag }} docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 16G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name test {{ unified_docker.pull_tag }} In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm To start the benchmark, use the following command with the appropriate options. .. code-block:: ./vllm_benchmark_report.sh -s $test_option -m {{model.model_repo}} -g $num_gpu -d {{model.precision}} .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token Here are some examples of running the benchmark with various options. * Latency benchmark Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. code-block:: ./vllm_benchmark_report.sh -s latency -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the latency report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``. * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. 
code-block:: shell ./vllm_benchmark_report.sh -s throughput -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the throughput report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about system settings and management practices to configure your system for MI300X GPUs, see `AMD Instinct MI300X system optimization `_ - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker-702: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.9.1_20250702-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series GPUs and includes the following components: * `ROCm {{ unified_docker.rocm_version }} `_ * `vLLM {{ unified_docker.vllm_version }} `_ * `PyTorch {{ unified_docker.pytorch_version }} `_ * `hipBLASLt {{ unified_docker.hipblaslt_version }} `_ With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series GPUs. .. _vllm-benchmark-available-models-20250702: Supported models ================ The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. raw:: html
Model group
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Model
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
.. _vllm-benchmark-vllm-702: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% endfor %} {% endfor %} .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. .. _vllm-benchmark-performance-measurements-20250702: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. Advanced features and known issues ================================== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad-702: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags {{model.mad_tag}} --keep-model-dir --live-output --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. 
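If you need to inspect the environment inside the MAD-launched container while it is still running -- for example, to check installed packages or GPU visibility -- you can attach a shell with standard Docker commands. This is an optional, minimal sketch; the container name follows the pattern shown above.

.. code-block:: shell

   # open an interactive shell in the running benchmark container
   docker exec -it container_ci-{{model.mad_tag}} /bin/bash

   # inside the container, verify that the GPUs are visible (rocm-smi ships with ROCm)
   rocm-smi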
The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_{{model.precision}}/``. Although the :ref:`available models ` are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, edit the default run behavior in the ``models.json`` configuration before running inference -- update the model's run ``args`` by changing ``--tunableop off`` to ``--tunableop on``. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking Run the vLLM benchmark tool independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: docker pull {{ unified_docker.pull_tag }} docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 16G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name test {{ unified_docker.pull_tag }} In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm To start the benchmark, use the following command with the appropriate options. .. code-block:: ./vllm_benchmark_report.sh -s $test_option -m {{model.model_repo}} -g $num_gpu -d {{model.precision}} .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type .. note:: The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token Here are some examples of running the benchmark with various options. * Latency benchmark Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. code-block:: ./vllm_benchmark_report.sh -s latency -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the latency report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``. * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. ..
code-block:: shell ./vllm_benchmark_report.sh -s throughput -m {{model.model_repo}} -g 8 -d {{model.precision}} Find the throughput report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM inference performance documentation. See :doc:`../vllm` for the latest version. .. _vllm-benchmark-unified-docker-715: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.9.1_20250715-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI300X Series GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series GPUs and includes the following components: .. list-table:: :header-rows: 1 * - Software component - Version * - `ROCm `__ - {{ unified_docker.rocm_version }} * - `vLLM `__ - {{ unified_docker.vllm_version }} * - `PyTorch `__ - {{ unified_docker.pytorch_version }} * - `hipBLASLt `__ - {{ unified_docker.hipblaslt_version }} With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for MI300X Series GPUs. What's new ========== The following is a summary of notable changes since the :doc:`previous ROCm/vLLM Docker release `. * The ``--compilation-config-parameter`` is no longer required as its options are now enabled by default. This parameter has been removed from the benchmarking script. * Resolved a Llama 3.1 405B custom all-reduce issue, eliminating the need for ``--disable-custom-all-reduce``. This parameter has been removed from the benchmarking script.
* Fixed a ``+rms_norm`` custom kernel issue. * Added quick reduce functionality. Set ``VLLM_ROCM_QUICK_REDUCE_QUANTIZATION=FP`` to enable; supported modes are ``FP``, ``INT8``, ``INT6``, ``INT4``. * Implemented a workaround to potentially mitigate GPU crashes experienced with the Command R+ model, pending a driver fix. Supported models ================ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.9.1_20250715-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} .. _vllm-benchmark-available-models-715: The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. raw:: html
Model group
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Model
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
.. _vllm-benchmark-vllm-715: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% endfor %} {% endfor %} .. note:: vLLM is a toolkit and library for LLM inference and serving. AMD implements high-performance custom kernels and modules in vLLM to enhance performance. See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for more information. .. _vllm-benchmark-performance-measurements-715: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.9.1_20250715-benchmark-models.yaml {% set unified_docker = data.vllm_benchmark.unified_docker.latest %} {% set model_groups = data.vllm_benchmark.model_groups %} Pull the Docker image ===================== Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad-715: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/reports_{{model.precision}}/``. 
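To take a quick look at the collected results from the host, you can list and preview the generated CSV files. This is a minimal sketch -- the exact file names depend on the model and the run.

.. code-block:: shell

   # list the generated performance reports
   ls ~/MAD/reports_{{model.precision}}/

   # preview the CSV summaries
   head ~/MAD/reports_{{model.precision}}/*.csv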
Although the :ref:`available models ` are preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, include the ``--tunableop on`` argument in your run. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking .. rubric:: Download the Docker image and required scripts 1. Run the vLLM benchmark tool independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} docker run -it \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --shm-size 16G \ --security-opt seccomp=unconfined \ --security-opt apparmor=unconfined \ --cap-add=SYS_PTRACE \ -v $(pwd):/workspace \ --env HUGGINGFACE_HUB_CACHE=/workspace \ --name test \ {{ unified_docker.pull_tag }} 2. In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/vllm``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/vllm 3. To start the benchmark, use the following command with the appropriate options. .. dropdown:: Benchmark options :open: .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$num_gpu`` - 1 or 8 - Number of GPUs * - ``$datatype`` - ``float16`` or ``float8`` - Data type The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. Command: .. code-block:: ./vllm_benchmark_report.sh \ -s $test_option \ -m {{model.model_repo}} \ -g $num_gpu \ -d {{model.precision}} .. note:: For best performance, it's recommended to run with ``VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1``. If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. rubric:: Benchmarking examples Here are some examples of running the benchmark with various options: * Latency benchmark Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. .. code-block:: ./vllm_benchmark_report.sh \ -s latency \ -m {{model.model_repo}} \ -g 8 \ -d {{model.precision}} Find the latency report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``. * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with :literal:`{{model.precision}}` precision. ..
code-block:: shell ./vllm_benchmark_report.sh \ -s throughput \ -m {{model.model_repo}} \ -g 8 \ -d {{model.precision}} Find the throughput report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Advanced usage ============== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. Reproducing the Docker image ---------------------------- To reproduce this ROCm/vLLM Docker image release, follow these steps: 1. Clone the `vLLM repository `__. .. code-block:: shell git clone https://github.com/ROCm/vllm.git 2. Checkout the specific release commit. .. code-block:: shell cd vllm git checkout b432b7a285aa0dcb9677380936ffa74931bb6d6f 3. Build the Docker image. Replace ``vllm-rocm`` with your desired image tag. .. code-block:: shell docker build -f docker/Dockerfile.rocm -t vllm-rocm . Known issues and workarounds ============================ AITER does not support FP8 KV cache yet. Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- :orphan: ************************************************** vLLM inference performance testing version history ************************************************** This table lists previous versions of the ROCm vLLM inference Docker image for inference performance testing. For detailed information about available models for benchmarking, see the version-specific documentation. You can find tagged previous releases of the ``ROCm/vllm`` Docker image on `Docker Hub `__. .. 
list-table:: :header-rows: 1 * - Docker image tag - Components - Resources * - ``rocm/vllm:rocm7.0.0_vllm_0.11.2_20251210`` - * ROCm 7.0.0 * vLLM 0.11.2 * PyTorch 2.9.0 - * :doc:`Documentation <../vllm>` * `Docker Hub `__ * - ``rocm/vllm:rocm7.0.0_vllm_0.11.1_20251103`` - * ROCm 7.0.0 * vLLM 0.11.1 * PyTorch 2.9.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm7.0.0_vllm_0.10.2_20251006`` - * ROCm 7.0.0 * vLLM 0.10.2 * PyTorch 2.9.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.4.1_vllm_0.10.1_20250909`` - * ROCm 6.4.1 * vLLM 0.10.1 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.4.1_vllm_0.10.0_20250812`` - * ROCm 6.4.1 * vLLM 0.10.0 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.4.1_vllm_0.9.1_20250715`` - * ROCm 6.4.1 * vLLM 0.9.1 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.4.1_vllm_0.9.1_20250702`` - * ROCm 6.4.1 * vLLM 0.9.1 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.4.1_vllm_0.9.0.1_20250605`` - * ROCm 6.4.1 * vLLM 0.9.0.1 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.3.1_vllm_0.8.5_20250521`` - * ROCm 6.3.1 * 0.8.5 vLLM (0.8.6.dev) * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.3.1_vllm_0.8.5_20250513`` - * ROCm 6.3.1 * vLLM 0.8.5 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.3.1_instinct_vllm0.8.3_20250415`` - * ROCm 6.3.1 * vLLM 0.8.3 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.3.1_instinct_vllm0.7.3_20250325`` - * ROCm 6.3.1 * vLLM 0.7.3 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6`` - * ROCm 6.3.1 * vLLM 0.6.6 * PyTorch 2.7.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4`` - * ROCm 6.2.1 * vLLM 0.6.4 * PyTorch 2.5.0 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/vllm:rocm6.2_mi300_ubuntu22.04_py3.9_vllm_7c5fd50`` - * ROCm 6.2.0 * vLLM 0.4.3 * PyTorch 2.4.0 - * :doc:`Documentation ` * `Docker Hub `__ --- :orphan: .. meta:: :description: Learn to validate diffusion model video generation on MI300X, MI350X and MI355X accelerators using prebuilt and optimized docker images. :keywords: xDiT, diffusion, video, video generation, image, image generation, validate, benchmark ************************ xDiT diffusion inference ************************ .. _xdit-video-diffusion-2510: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.10-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker %} {% set model_groups = data.xdit_diffusion_inference.model_groups%} The `rocm/pytorch-xdit <{{ docker.docker_hub_url }}>`_ Docker image offers a prebuilt, optimized inference environment based on `xDiT `_ for benchmarking diffusion-based video and image generation on AMD Instinct MI355X, MI350X (gfx950), MI325X, and MI300X (gfx942) GPUs. This image is based on ROCm {{docker.ROCm}} preview release via `TheRock `_ and includes the following software components: .. tab-set:: .. tab-item:: {{ docker.pull_tag }} .. 
list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} Follow this guide to pull the required image, spin up a container, download the model, and run a benchmark. For preview and development releases, see `amdsiloai/pytorch-xdit `_. What's new ========== - Initial ROCm-enabled xDiT Docker release for diffusion inference. - Supported architectures: gfx942 and gfx950 (AMD Instinct™ MI300X, MI325X, MI350X, and MI355X). - Supported workloads: Wan 2.1, Wan 2.2, HunyuanVideo, and Flux models. .. _xdit-video-diffusion-supported-models-2510: Supported models ================ The following models are supported for inference performance benchmarking. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.10-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker %} {% set model_groups = data.xdit_diffusion_inference.model_groups%} .. raw:: html
Model
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Variant
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length == 1 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
{% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} .. note:: To learn more about your specific model see the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ or visit the `GitHub page <{{ model.github }}>`__. Note that some models require access authorization before use via an external license agreement through a third party. {% endfor %} {% endfor %} System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the `System validation and optimization `__ guide to properly configure your system settings before starting. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.10-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker %} For this tutorial, it's recommended to use the latest ``{{ docker.pull_tag }}`` Docker image. Pull the image using the following command: .. code-block:: shell docker pull {{ docker.pull_tag }} Validate and benchmark ====================== Once the image has been downloaded you can follow these steps to run benchmarks and generate outputs. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.10-inference-models.yaml {% set model_groups = data.xdit_diffusion_inference.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} The following commands are written for {{ model.model }}. See :ref:`xdit-video-diffusion-supported-models-2510` to switch to another available model. {% endfor %} {% endfor %} .. _xdit-video-diffusion-setup-2510: Prepare the model ----------------- .. note:: If you're using ROCm MAD to :ref:`run your model `, you can skip this section. MAD will handle starting the container and downloading required models inside the container. You can either use an existing Hugging Face cache or download the model fresh inside the container. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.10-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker %} {% set model_groups = data.xdit_diffusion_inference.model_groups%} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: Option 1: Use existing Hugging Face cache If you already have models downloaded on your host system, you can mount your existing cache. 1. Set your Hugging Face cache location. .. code-block:: shell export HF_HOME=/your/hf_cache/location 2. Download the model (if not already cached). .. code-block:: shell huggingface-cli download {{ model.model_repo }} {% if model.revision %} --revision {{ model.revision }} {% endif %} 3. Launch the container with mounted cache. .. 
code-block:: shell docker run \ -it --rm \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --user root \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --ipc=host \ --network host \ --privileged \ --shm-size 128G \ --name pytorch-xdit \ -e HSA_NO_SCRATCH_RECLAIM=1 \ -e OMP_NUM_THREADS=16 \ -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ -e HF_HOME=/app/huggingface_models \ -v $HF_HOME:/app/huggingface_models \ {{ docker.pull_tag }} .. tab-item:: Option 2: Download inside container If you prefer to keep the container self-contained or don't have an existing cache. 1. Launch the container .. code-block:: shell docker run \ -it --rm \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --user root \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --ipc=host \ --network host \ --privileged \ --shm-size 128G \ --name pytorch-xdit \ -e HSA_NO_SCRATCH_RECLAIM=1 \ -e OMP_NUM_THREADS=16 \ -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ {{ docker.pull_tag }} 2. Inside the container, set the Hugging Face cache location and download the model. .. code-block:: shell export HF_HOME=/app/huggingface_models huggingface-cli download {{ model.model_repo }} {% if model.revision %} --revision {{ model.revision }} {% endif %} .. warning:: Models will be downloaded to the container's filesystem and will be lost when the container is removed unless you persist the data with a volume. {% endfor %} {% endfor %} .. _xdit-video-diffusion-run-2510: Run inference ============= You can benchmark models through `MAD `__-integrated automation or standalone torchrun commands. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.10-inference-models.yaml {% set model_groups = data.xdit_diffusion_inference.model_groups%} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} .. tab-set:: .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. On the host machine, use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one node. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. .. tab-item:: Standalone benchmarking To run the benchmarks for {{ model.model }}, use the following command: .. code-block:: shell {% if model.model == "Hunyuan Video" %} cd /app/Hunyuanvideo mkdir results torchrun --nproc_per_node=8 run.py \ --model tencent/HunyuanVideo \ --prompt "In the large cage, two puppies were wagging their tails at each other." 
\ --height 720 --width 1280 --num_frames 129 \ --num_inference_steps 50 --warmup_steps 1 --n_repeats 1 \ --ulysses_degree 8 \ --enable_tiling --enable_slicing \ --use_torch_compile \ --bench_output results {% endif %} {% if model.model == "Wan2.1" %} cd Wan2.1 mkdir results torchrun --nproc_per_node=8 run.py \ --task i2v-14B \ --size 720*1280 --frame_num 81 \ --ckpt_dir "${HF_HOME}/hub/models--Wan-AI--Wan2.1-I2V-14B-720P/snapshots/8823af45fcc58a8aa999a54b04be9abc7d2aac98/" \ --image "/app/Wan2.1/examples/i2v_input.JPG" \ --ulysses_size 8 --ring_size 1 \ --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \ --benchmark_output_directory results --save_file video.mp4 --num_benchmark_steps 1 \ --offload_model 0 \ --vae_dtype bfloat16 \ --allow_tf32 \ --compile {% endif %} {% if model.model == "Wan2.2" %} cd Wan2.2 mkdir results torchrun --nproc_per_node=8 run.py \ --task i2v-A14B \ --size 720*1280 --frame_num 81 \ --ckpt_dir "${HF_HOME}/hub/models--Wan-AI--Wan2.2-I2V-A14B/snapshots/206a9ee1b7bfaaf8f7e4d81335650533490646a3/" \ --image "/app/Wan2.2/examples/i2v_input.JPG" \ --ulysses_size 8 --ring_size 1 \ --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \ --benchmark_output_directory results --save_file video.mp4 --num_benchmark_steps 1 \ --offload_model 0 \ --vae_dtype bfloat16 \ --allow_tf32 \ --compile {% endif %} {% if model.model == "FLUX.1" %} cd Flux mkdir results torchrun --nproc_per_node=8 /app/Flux/run.py \ --model black-forest-labs/FLUX.1-dev \ --seed 42 \ --prompt "A small cat" \ --height 1024 \ --width 1024 \ --num_inference_steps 25 \ --max_sequence_length 256 \ --warmup_steps 5 \ --no_use_resolution_binning \ --ulysses_degree 8 \ --use_torch_compile \ --num_repetitions 1 \ --benchmark_output_directory results {% endif %} The generated video will be stored under the results directory. For the actual benchmark step runtimes, see {% if model.model == "Hunyuan Video" %}stdout.{% elif model.model in ["Wan2.1", "Wan2.2"] %}results/outputs/rank0_*.json{% elif model.model == "FLUX.1" %}results/timing.json{% endif %} {% if model.model == "FLUX.1" %}You may also use ``run_usp.py`` which implements USP without modifying the default diffusers pipeline. {% endif %} {% endfor %} {% endfor %} Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `__. Previous versions ================= See :doc:`xdit-history` to find documentation for previous releases of xDiT diffusion inference performance testing. --- :orphan: .. 
meta:: :description: Learn to validate diffusion model video generation on MI300X, MI350X and MI355X accelerators using prebuilt and optimized docker images. :keywords: xDiT, diffusion, video, video generation, image, image generation, validate, benchmark ************************ xDiT diffusion inference ************************ .. caution:: This documentation does not reflect the latest version of the xDiT diffusion inference documentation. See :doc:`/how-to/rocm-for-ai/inference/xdit-diffusion-inference` for the latest version. .. _xdit-video-diffusion-2511: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.11-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker | selectattr("version", "equalto", "v25-11") | first %} {% set model_groups = data.xdit_diffusion_inference.model_groups%} The `rocm/pytorch-xdit <{{ docker.docker_hub_url }}>`_ Docker image offers a prebuilt, optimized environment based on `xDiT `_ for benchmarking diffusion model video and image generation on gfx942 and gfx950 series (AMD Instinct™ MI300X, MI325X, MI350X, and MI355X) GPUs. The image runs ROCm **{{docker.ROCm}}** (preview) based on `TheRock `_ and includes the following components: .. dropdown:: Software components .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} Follow this guide to pull the required image, spin up a container, download the model, and run a benchmark. For preview and development releases, see `amdsiloai/pytorch-xdit `_. What's new ========== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.11-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker | selectattr("version", "equalto", "v25-11") | first %} {% set model_groups = data.xdit_diffusion_inference.model_groups%} {% for item in docker.whats_new %} * {{ item }} {% endfor %} .. _xdit-video-diffusion-supported-models-2511: Supported models ================ The following models are supported for inference performance benchmarking. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.11-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker | selectattr("version", "equalto", "v25-11") | first %} {% set model_groups = data.xdit_diffusion_inference.model_groups %} {# Create a lookup for supported models #} {% set supported_lookup = {} %} {% for supported in docker.supported_models %} {% set _ = supported_lookup.update({supported.group: supported.models}) %} {% endfor %} .. raw:: html
Model
{% for model_group in model_groups %} {% if model_group.group in supported_lookup %}
{{ model_group.group }}
{% endif %} {% endfor %}
Variant
{% for model_group in model_groups %} {% if model_group.group in supported_lookup %} {% set supported_models = supported_lookup[model_group.group] %} {% set models = model_group.models %} {% for model in models %} {% if model.model in supported_models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endif %} {% endfor %} {% endif %} {% endfor %}
{% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.page_tag }} .. note:: To learn more about your specific model see the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ or visit the `GitHub page <{{ model.github }}>`__. Note that some models require access authorization before use via an external license agreement through a third party. {% endfor %} {% endfor %} System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.11-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker | selectattr("version", "equalto", "v25-11") | first %} For this tutorial, it's recommended to use the latest ``{{ docker.pull_tag }}`` Docker image. Pull the image using the following command: .. code-block:: shell docker pull {{ docker.pull_tag }} Validate and benchmark ====================== Once the image has been downloaded you can follow these steps to run benchmarks and generate outputs. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.11-inference-models.yaml {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.page_tag}} The following commands are written for {{ model.model }}. See :ref:`xdit-video-diffusion-supported-models-2511` to switch to another available model. {% endfor %} {% endfor %} Choose your setup method ------------------------ You can either use an existing Hugging Face cache or download the model fresh inside the container. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.11-inference-models.yaml {% set docker = data.xdit_diffusion_inference.docker | selectattr("version", "equalto", "v25-11") | first %} {% set model_groups = data.xdit_diffusion_inference.model_groups%} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.page_tag}} .. tab-set:: .. tab-item:: Option 1: Use existing Hugging Face cache If you already have models downloaded on your host system, you can mount your existing cache. 1. Set your Hugging Face cache location. .. code-block:: shell export HF_HOME=/your/hf_cache/location 2. Download the model (if not already cached). .. code-block:: shell huggingface-cli download {{ model.model_repo }} {% if model.revision %} --revision {{ model.revision }} {% endif %} 3. Launch the container with mounted cache. .. 
code-block:: shell docker run \ -it --rm \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --user root \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --ipc=host \ --network host \ --privileged \ --shm-size 128G \ --name pytorch-xdit \ -e HSA_NO_SCRATCH_RECLAIM=1 \ -e OMP_NUM_THREADS=16 \ -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ -e HF_HOME=/app/huggingface_models \ -v $HF_HOME:/app/huggingface_models \ {{ docker.pull_tag }} .. tab-item:: Option 2: Download inside container If you prefer to keep the container self-contained or don't have an existing cache. 1. Launch the container .. code-block:: shell docker run \ -it --rm \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --user root \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --ipc=host \ --network host \ --privileged \ --shm-size 128G \ --name pytorch-xdit \ -e HSA_NO_SCRATCH_RECLAIM=1 \ -e OMP_NUM_THREADS=16 \ -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ {{ docker.pull_tag }} 2. Inside the container, set the Hugging Face cache location and download the model. .. code-block:: shell export HF_HOME=/app/huggingface_models huggingface-cli download {{ model.model_repo }} {% if model.revision %} --revision {{ model.revision }} {% endif %} .. warning:: Models will be downloaded to the container's filesystem and will be lost when the container is removed unless you persist the data with a volume. {% endfor %} {% endfor %} Run inference ============= .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.11-inference-models.yaml {% set model_groups = data.xdit_diffusion_inference.model_groups%} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.page_tag }} .. tab-set:: .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. On the host machine, use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one node. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. .. tab-item:: Standalone benchmarking To run the benchmarks for {{ model.model }}, use the following command: .. code-block:: shell {% if model.model == "Hunyuan Video" %} cd /app/Hunyuanvideo mkdir results torchrun --nproc_per_node=8 run.py \ --model tencent/HunyuanVideo \ --prompt "In the large cage, two puppies were wagging their tails at each other." 
\ --height 720 --width 1280 --num_frames 129 \ --num_inference_steps 50 --warmup_steps 1 --n_repeats 1 \ --ulysses_degree 8 \ --enable_tiling --enable_slicing \ --use_torch_compile \ --bench_output results {% endif %} {% if model.model == "Wan2.1" %} cd Wan2.1 mkdir results torchrun --nproc_per_node=8 run.py \ --task i2v-14B \ --size 720*1280 --frame_num 81 \ --ckpt_dir "${HF_HOME}/hub/models--Wan-AI--Wan2.1-I2V-14B-720P/snapshots/8823af45fcc58a8aa999a54b04be9abc7d2aac98/" \ --image "/app/Wan2.1/examples/i2v_input.JPG" \ --ulysses_size 8 --ring_size 1 \ --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \ --benchmark_output_directory results --save_file video.mp4 --num_benchmark_steps 1 \ --offload_model 0 \ --vae_dtype bfloat16 \ --allow_tf32 \ --compile {% endif %} {% if model.model == "Wan2.2" %} cd Wan2.2 mkdir results torchrun --nproc_per_node=8 run.py \ --task i2v-A14B \ --size 720*1280 --frame_num 81 \ --ckpt_dir "${HF_HOME}/hub/models--Wan-AI--Wan2.2-I2V-A14B/snapshots/206a9ee1b7bfaaf8f7e4d81335650533490646a3/" \ --image "/app/Wan2.2/examples/i2v_input.JPG" \ --ulysses_size 8 --ring_size 1 \ --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \ --benchmark_output_directory results --save_file video.mp4 --num_benchmark_steps 1 \ --offload_model 0 \ --vae_dtype bfloat16 \ --allow_tf32 \ --compile {% endif %} {% if model.model == "FLUX.1" %} cd Flux mkdir results torchrun --nproc_per_node=8 /app/Flux/run.py \ --model black-forest-labs/FLUX.1-dev \ --seed 42 \ --prompt "A small cat" \ --height 1024 \ --width 1024 \ --num_inference_steps 25 \ --max_sequence_length 256 \ --warmup_steps 5 \ --no_use_resolution_binning \ --ulysses_degree 8 \ --use_torch_compile \ --num_repetitions 1 \ --benchmark_output_directory results {% endif %} The generated video will be stored under the results directory. For the actual benchmark step runtimes, see {% if model.model == "Hunyuan Video" %}stdout.{% elif model.model in ["Wan2.1", "Wan2.2"] %}results/outputs/rank0_*.json{% elif model.model == "FLUX.1" %}results/timing.json{% endif %} {% if model.model == "FLUX.1" %}You may also use ``run_usp.py`` which implements USP without modifying the default diffusers pipeline. {% endif %} {% endfor %} {% endfor %} Previous versions ================= See :doc:`/how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-history` to find documentation for previous releases of xDiT diffusion inference performance testing. --- :orphan: .. meta:: :description: Learn to validate diffusion model video generation on MI300X, MI350X and MI355X accelerators using prebuilt and optimized docker images. 
:keywords: xDiT, diffusion, video, video generation, image, image generation, validate, benchmark ************************ xDiT diffusion inference ************************ .. caution:: This documentation does not reflect the latest version of the ROCm xDiT diffusion inference documentation. See :doc:`/how-to/rocm-for-ai/inference/xdit-diffusion-inference` for the latest version. .. _xdit-video-diffusion-2512: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.12-inference-models.yaml {% set docker = data.docker %} The `rocm/pytorch-xdit <{{ docker.docker_hub_url }}>`_ Docker image offers a prebuilt, optimized environment based on `xDiT `_ for benchmarking diffusion model video and image generation on AMD Instinct MI355X, MI350X (gfx950), MI325X, and MI300X (gfx942) GPUs. The image runs ROCm **{{docker.ROCm}}** (preview) based on `TheRock `_ and includes the following components: .. dropdown:: Software components .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_data in docker.components.items() %} * - `{{ component_name }} <{{ component_data.url }}>`_ - {{ component_data.version }} {% endfor %} Follow this guide to pull the required image, spin up a container, download the model, and run a benchmark. For preview and development releases, see `amdsiloai/pytorch-xdit `_. What's new ========== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.12-inference-models.yaml {% set docker = data.docker %} {% for item in docker.whats_new %} * {{ item }} {% endfor %} .. _xdit-video-diffusion-supported-models-2512: Supported models ================ The following models are supported for inference performance benchmarking. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.12-inference-models.yaml {% set docker = data.docker %} .. raw:: html
      <!-- Model / Variant selector buttons generated for each supported model group -->
{% for model_group in docker.supported_models %} {% for model in model_group.models %} .. container:: model-doc {{ model.js_tag }} .. note:: To learn more about your specific model see the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ or visit the `GitHub page <{{ model.github }}>`__. Note that some models require access authorization before use via an external license agreement through a third party. {% endfor %} {% endfor %} System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.12-inference-models.yaml {% set docker = data.docker %} For this tutorial, it's recommended to use the latest ``{{ docker.pull_tag }}`` Docker image. Pull the image using the following command: .. code-block:: shell docker pull {{ docker.pull_tag }} Validate and benchmark ====================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.12-inference-models.yaml {% set docker = data.docker %} Once the image has been downloaded you can follow these steps to run benchmarks and generate outputs. {% for model_group in docker.supported_models %} {% for model in model_group.models %} .. container:: model-doc {{model.js_tag}} The following commands are written for {{ model.model }}. See :ref:`xdit-video-diffusion-supported-models` to switch to another available model. {% endfor %} {% endfor %} Choose your setup method ------------------------ You can either use an existing Hugging Face cache or download the model fresh inside the container. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.12-inference-models.yaml {% set docker = data.docker %} {% for model_group in docker.supported_models %} {% for model in model_group.models %} .. container:: model-doc {{model.js_tag}} .. tab-set:: .. tab-item:: Option 1: Use existing Hugging Face cache If you already have models downloaded on your host system, you can mount your existing cache. 1. Set your Hugging Face cache location. .. code-block:: shell export HF_HOME=/your/hf_cache/location 2. Download the model (if not already cached). .. code-block:: shell huggingface-cli download {{ model.model_repo }} {% if model.revision %} --revision {{ model.revision }} {% endif %} 3. Launch the container with mounted cache. .. code-block:: shell docker run \ -it --rm \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --user root \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --ipc=host \ --network host \ --privileged \ --shm-size 128G \ --name pytorch-xdit \ -e HSA_NO_SCRATCH_RECLAIM=1 \ -e OMP_NUM_THREADS=16 \ -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ -e HF_HOME=/app/huggingface_models \ -v $HF_HOME:/app/huggingface_models \ {{ docker.pull_tag }} .. tab-item:: Option 2: Download inside container If you prefer to keep the container self-contained or don't have an existing cache. 1. 
Launch the container .. code-block:: shell docker run \ -it --rm \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --user root \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --ipc=host \ --network host \ --privileged \ --shm-size 128G \ --name pytorch-xdit \ -e HSA_NO_SCRATCH_RECLAIM=1 \ -e OMP_NUM_THREADS=16 \ -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ {{ docker.pull_tag }} 2. Inside the container, set the Hugging Face cache location and download the model. .. code-block:: shell export HF_HOME=/app/huggingface_models huggingface-cli download {{ model.model_repo }} {% if model.revision %} --revision {{ model.revision }} {% endif %} .. warning:: Models will be downloaded to the container's filesystem and will be lost when the container is removed unless you persist the data with a volume. {% endfor %} {% endfor %} Run inference ============= .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/xdit_25.12-inference-models.yaml {% set docker = data.docker %} {% for model_group in docker.supported_models %} {% for model in model_group.models %} .. container:: model-doc {{ model.js_tag }} .. tab-set:: .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. On the host machine, use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one node. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. .. tab-item:: Standalone benchmarking To run the benchmarks for {{ model.model }}, use the following command: .. code-block:: shell {% if model.model == "Hunyuan Video" %} cd /app/Hunyuanvideo mkdir results torchrun --nproc_per_node=8 run.py \ --model {{ model.model_repo }} \ --prompt "In the large cage, two puppies were wagging their tails at each other." \ --height 720 --width 1280 --num_frames 129 \ --num_inference_steps 50 --warmup_steps 1 --n_repeats 1 \ --ulysses_degree 8 \ --enable_tiling --enable_slicing \ --use_torch_compile \ --bench_output results {% endif %} {% if model.model == "Wan2.1" %} cd Wan mkdir results torchrun --nproc_per_node=8 /app/Wan/run.py \ --task i2v \ --height 720 \ --width 1280 \ --model {{ model.model_repo }} \ --img_file_path /app/Wan/i2v_input.JPG \ --ulysses_degree 8 \ --seed 42 \ --num_frames 81 \ --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." 
\ --num_repetitions 1 \ --num_inference_steps 40 \ --use_torch_compile {% endif %} {% if model.model == "Wan2.2" %} cd Wan mkdir results torchrun --nproc_per_node=8 /app/Wan/run.py \ --task i2v \ --height 720 \ --width 1280 \ --model {{ model.model_repo }} \ --img_file_path /app/Wan/i2v_input.JPG \ --ulysses_degree 8 \ --seed 42 \ --num_frames 81 \ --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \ --num_repetitions 1 \ --num_inference_steps 40 \ --use_torch_compile {% endif %} {% if model.model == "FLUX.1" %} cd Flux mkdir results torchrun --nproc_per_node=8 /app/Flux/run.py \ --model {{ model.model_repo }} \ --seed 42 \ --prompt "A small cat" \ --height 1024 \ --width 1024 \ --num_inference_steps 25 \ --max_sequence_length 256 \ --warmup_steps 5 \ --no_use_resolution_binning \ --ulysses_degree 8 \ --use_torch_compile \ --num_repetitions 50 {% endif %} {% if model.model == "stable-diffusion-3.5-large" %} cd StableDiffusion3.5 mkdir results torchrun --nproc_per_node=8 /app/StableDiffusion3.5/run.py \ --model {{ model.model_repo }} \ --num_inference_steps 28 \ --prompt "A capybara holding a sign that reads Hello World" \ --use_torch_compile \ --pipefusion_parallel_degree 4 \ --use_cfg_parallel \ --num_repetitions 50 \ --dtype torch.float16 \ --output_path results {% endif %} The generated video will be stored under the results directory. For the actual benchmark step runtimes, see {% if model.model == "Hunyuan Video" %}stdout.{% elif model.model in ["Wan2.1", "Wan2.2"] %}results/outputs/rank0_*.json{% elif model.model == "FLUX.1" %}results/timing.json{% elif model.model == "stable-diffusion-3.5-large"%}benchmark_results.csv{% endif %} {% if model.model == "FLUX.1" %}You may also use ``run_usp.py`` which implements USP without modifying the default diffusers pipeline. {% endif %} {% endfor %} {% endfor %} Previous versions ================= See :doc:`/how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-history` to find documentation for previous releases of xDiT diffusion inference performance testing. --- :orphan: ************************************************************ xDiT diffusion inference performance testing version history ************************************************************ This table lists previous versions of the ROCm xDiT diffusion inference performance testing environment. For detailed information about available models for benchmarking, see the version-specific documentation. .. list-table:: :header-rows: 1 * - Docker image tag - Components - Resources * - ``rocm/pytorch-xdit:v25.13`` (latest) - * TheRock 1728a81 - * :doc:`Documentation <../../xdit-diffusion-inference>` * `Docker Hub `__ * - ``rocm/pytorch-xdit:v25.12`` - * `ROCm 7.10.0 preview `__ * TheRock 3e3f834 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/pytorch-xdit:v25.11`` - * `ROCm 7.10.0 preview `__ * TheRock 3e3f834 - * :doc:`Documentation ` * `Docker Hub `__ * - ``rocm/pytorch-xdit:v25.10`` - * `ROCm 7.9.0 preview `__ * TheRock 7afbe45 - * :doc:`Documentation ` * `Docker Hub `__ --- .. 
meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm PyTorch Docker image. :keywords: model, MAD, automation, dashboarding, validate, pytorch ************************************* PyTorch inference performance testing ************************************* .. _pytorch-inference-benchmark-docker: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/pytorch-inference-benchmark-models.yaml {% set unified_docker = data.pytorch_inference_benchmark.unified_docker.latest %} {% set model_groups = data.pytorch_inference_benchmark.model_groups %} The `ROCm PyTorch Docker `_ image offers a prebuilt, optimized environment for testing model inference performance on AMD Instinct™ MI300X Series GPUs. This guide demonstrates how to use the AMD Model Automation and Dashboarding (MAD) tool with the ROCm PyTorch container to test inference performance on various models efficiently. .. _pytorch-inference-benchmark-available-models: Supported models ================ The following models are supported for inference performance benchmarking with PyTorch and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. raw:: html
      <!-- Model selector buttons generated for each supported model group -->
{% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization before use via an external license agreement through a third party. {% endfor %} {% endfor %} System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU might hang until the periodic balancing is finalized. For more information, see the :ref:`system validation steps `. .. code-block:: shell # disable automatic NUMA balancing sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' # check if NUMA balancing is disabled (returns 0 if disabled) cat /proc/sys/kernel/numa_balancing 0 To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== .. container:: model-doc pyt_chai1_inference Use the following command to pull the `ROCm PyTorch Docker image `__ from Docker Hub. .. code-block:: shell docker pull rocm/pytorch:rocm6.2.3_ubuntu22.04_py3.10_pytorch_release_2.3.0_triton_llvm_reg_issue .. note:: The Chai-1 benchmark uses a specifically selected Docker image using ROCm 6.2.3 and PyTorch 2.3.0 to address an accuracy issue. .. container:: model-doc pyt_clip_inference pyt_mochi_video_inference pyt_wan2.1_inference pyt_janus_pro_inference pyt_hy_video Use the following command to pull the `ROCm PyTorch Docker image `__ from Docker Hub. .. code-block:: shell docker pull rocm/pytorch:latest .. _pytorch-benchmark-get-started: Benchmarking ============ .. _pytorch-inference-benchmark-mad: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} To simplify performance testing, the ROCm Model Automation and Dashboarding (``__) project provides ready-to-use scripts and configuration. To start, clone the MAD repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the ``{{model.precision}}`` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the model are collected in ``perf_{{model.mad_tag}}.csv``. {% if model.mad_tag != "pyt_janus_pro_inference" %} .. note:: For improved performance, consider enabling TunableOp. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, include the ``--tunableop on`` argument in your run. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. Although this might increase the initial training time, it can result in a performance gain. 
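For example, a minimal sketch of the same benchmark run with TunableOp enabled -- the only change from the command above is the added ``--tunableop on`` argument:

.. code-block:: shell

   export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models"
   madengine run \
       --tags {{model.mad_tag}} \
       --keep-model-dir \
       --live-output \
       --timeout 28800 \
       --tunableop on

Expect the first (warm-up) pass to take longer while tuned configurations are collected; the performance-collection pass then reuses them.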
{% endif %} {% endfor %} {% endfor %} Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`../../inference-optimization/workload`. - To learn how to run LLM models from Hugging Face or your own model, see :doc:`Running models from Hugging Face <../hugging-face-models>`. - To learn how to optimize inference on LLMs, see :doc:`Inference optimization <../../inference-optimization/index>`. - To learn how to fine-tune LLMs, see :doc:`Fine-tuning LLMs <../../fine-tuning/index>`. --- .. meta:: :description: SGLang multi-node disaggregated distributed inference using Mooncake :keywords: model, sglang, mooncake, disagg, disaggregated, distributed, multi-node, docker ****************************************** SGLang distributed inference with Mooncake ****************************************** As LLM inference increasingly demands handling massive models and dynamic workloads, efficient distributed inference becomes essential. Traditional co-located architectures face bottlenecks due to tightly coupled memory and compute resources, which limits scalability and flexibility. Disaggregated inference refers to the process of splitting LLM inference into distinct prefill and decode phases. This architecture, facilitated by libraries like Mooncake, uses high-bandwidth RDMA to transfer the Key-Value (KV) cache between prefill and decode nodes. This allows for independent resource scaling and optimization, resulting in improved efficiency and throughput. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/sglang-distributed-benchmark-models.yaml {% set docker = data.dockers[0] %} `SGLang `__ is a high-performance inference and serving engine for large language models (LLMs) and vision models. The ROCm-enabled `SGLang base Docker image <{{ docker.docker_hub_url }}>`__ bundles SGLang with PyTorch, which is optimized for AMD Instinct MI300X Series GPUs. It includes the following software components: .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} The following guide describes how to set up and run SGLang and Mooncake for disaggregated distributed inference on a Slurm cluster using AMD Instinct MI300X Series GPUs backed by Mellanox CX-7 NICs. Prerequisites ============= Before starting, ensure you have: * A Slurm cluster with at least three nodes: one for the proxy, one for prefill (``xP``), and one for decode (``yD``). ``Nodes -> xP + yD + 1`` * A Dockerized environment with SGLang, Mooncake, etcd, and NIC drivers built in. See :ref:`sglang-disagg-inf-build-docker-image` for instructions. * A shared filesystem for storing models, scripts, and logs (cluster-specific). Supported models ================ The following models are supported for SGLang disaggregated prefill/decode inference. Some instructions, commands, and recommendations in this documentation might vary by selected model. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/sglang-distributed-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
      <!-- Model type / Model selector buttons generated for each supported model group -->
{% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.model_repo }} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`__ to learn more about this model. Some models require access authorization prior to use through an external license agreement with a third party. {% endfor %} {% endfor %} .. _sglang-disagg-inf-build-docker-image: Build the Docker image ---------------------- Get the Dockerfile located in ``__. It uses `lmsysorg/sglang:v0.5.2rc1-rocm700-mi30x `__ as the base Docker image and installs the necessary components for Mooncake, etcd, and Mellanox network drivers. .. code-block:: shell git clone https://github.com/ROCm/MAD.git cd MAD/docker docker build \ -t sglang_disagg_pd_image \ -f sglang_disagg_inference.ubuntu.amd.Dockerfile . Benchmarking ============ The ``__ repository contains scripts to launch SGLang inference with prefill/decode disaggregation via Mooncake for supported models. * `scripts/sglang_dissag/run_xPyD_models.slurm `__ -- the main Slurm batch script to launch Docker containers on all nodes using ``sbatch`` or ``salloc``. * `scripts/sglang_dissag/sglang_disagg_server.sh `__ -- the entrypoint script that runs inside each container to start the correct service -- proxy, prefill, or decode. * `scripts/sglang_dissag/benchmark_xPyD.sh `__ -- the benchmark script to run the GSM8K accuracy benchmark and the SGLang benchmarking tool for performance measurement. * `scripts/sglang_dissag/benchmark_parser.py `__ -- the log parser script to be run on the concurrency benchmark log file to generate tabulated data. Launch the service ------------------ The service is deployed using a Slurm batch script that orchestrates the containers across the allocated nodes. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/sglang-distributed-benchmark-models.yaml {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.model_repo }} .. code-block:: shell # Clone the MAD repo if you haven't already and # navigate to the scripts directory git clone https://github.com/ROCm/MAD.git cd MAD/scripts/sglang_disagg/ # Slurm sbatch run command export DOCKER_IMAGE_NAME=sglang_disagg_pd_image export xP= export yD= export MODEL_NAME={{ model.model_repo }} # num_nodes = xP + yD + 1 sbatch -N -n --nodelist= run_xPyD_models.slurm {% endfor %} {% endfor %} Post-run logs and testing ------------------------- Logs are stored in your shared filesystem in the directory specified by the ``LOG_PATH`` variable in the Slurm script. A new directory named after the Slurm job ID is created for each run. Inside that directory, you can access various logs: * ``pd_sglang_bench_serving.sh_NODE<...>.log`` -- the main log for each server node. * ``etcd_NODE<...>.log`` -- logs for etcd services. * ``prefill_NODE<...>.log`` -- logs for the prefill services. * ``decode_NODE<...>.log`` -- logs for the decode services. Use the benchmark parser script for concurrency logs to tabulate different data. .. code-block:: shell python3 benchmark_parser.py To verify the service is responsive, you can try sending a ``curl`` request to test the launched server from the Docker container on the proxy node. For example: .. 
code-block:: shell curl -X POST http://127.0.0.1:30000/generate \ -H "Content-Type: application/json" \ -d '{ "text": "Let me tell you a story ", "sampling_params": { "temperature": 0.3 } }' Known issues ============ When running larger models, such as DeepSeek-V3 and Llama-3.1-405B-Instruct-FP8-KV, at higher concurrency levels (512+), the following error might occur: .. code-block:: shell-session `__. - To learn more about the options for latency and throughput benchmark scripts, see ``__. - See the base upstream Docker image on `Docker Hub `__. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `__. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`previous-versions/sglang-history` to find documentation for previous releases of SGLang inference performance testing. --- .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and SGLang :keywords: model, MAD, automation, dashboarding, validate ***************************************************************** SGLang inference performance testing DeepSeek-R1-Distill-Qwen-32B ***************************************************************** .. _sglang-benchmark-unified-docker: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/sglang-benchmark-models.yaml {% set docker = data.dockers[0] %} `SGLang `__ is a high-performance inference and serving engine for large language models (LLMs) and vision models. The ROCm-enabled `SGLang Docker image <{{ docker.docker_hub_url }}>`__ bundles SGLang with PyTorch, optimized for AMD Instinct MI300X Series GPUs. It includes the following software components: .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/sglang-benchmark-models.yaml {% set unified_docker = data.dockers[0] %} {% set model_groups = data.model_groups %} Pull the Docker image ===================== Download the `SGLang Docker image <{{ unified_docker.docker_hub_url }}>`__. Use the following command to pull the Docker image from Docker Hub. .. 
code-block:: shell docker pull {{ unified_docker.pull_tag }} Benchmarking ============ Once the setup is complete, choose one of the following methods to benchmark inference performance with `DeepSeek-R1-Distill-Qwen-32B `__. .. _sglang-benchmark-mad: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one GPU with the ``{{model.precision}}`` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/perf_DeepSeek-R1-Distill-Qwen-32B.csv``. Although the DeepSeek-R1-Distill-Qwen-32B is preconfigured to collect latency and throughput performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. .. tab-item:: Standalone benchmarking .. rubric:: Download the Docker image and required scripts 1. Run the SGLang benchmark script independently by starting the `Docker container <{{ unified_docker.docker_hub_url }}>`__ as shown in the following snippet. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} docker run -it \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --shm-size 16G \ --security-opt seccomp=unconfined \ --security-opt apparmor=unconfined \ --cap-add=SYS_PTRACE \ -v $(pwd):/workspace \ --env HUGGINGFACE_HUB_CACHE=/workspace \ --name test \ {{ unified_docker.pull_tag }} 2. In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``~/MAD/scripts/sglang``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/sglang 3. To start the benchmark, use the following command with the appropriate options. .. dropdown:: Benchmark options :open: .. list-table:: :header-rows: 1 :align: center * - Name - Options - Description * - ``$test_option`` - latency - Measure decoding token latency * - - throughput - Measure token generation throughput * - - all - Measure both throughput and latency * - ``$num_gpu`` - 8 - Number of GPUs * - ``$datatype`` - ``bfloat16`` - Data type * - ``$dataset`` - random - Dataset The input sequence length, output sequence length, and tensor parallel (TP) are already configured. You don't need to specify them with this script. Command: .. code-block:: shell ./sglang_benchmark_report.sh -s $test_option -m {{model.model_repo}} -g $num_gpu -d $datatype [-a $dataset] .. note:: If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: shell-session OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. 
rubric:: Benchmarking examples Here are some examples of running the benchmark with various options: * Latency benchmark Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with ``{{model.precision}}`` precision. .. code-block:: shell ./sglang_benchmark_report.sh \ -s latency \ -m {{model.model_repo}} \ -g 8 \ -d {{model.precision}} Find the latency report at ``./reports_{{model.precision}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``. * Throughput benchmark Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with ``{{model.precision}}`` precision. .. code-block:: shell ./sglang_benchmark_report.sh \ -s throughput \ -m {{model.model_repo}} \ -g 8 \ -d {{model.precision}} \ -a random Find the throughput report at ``./reports_{{model.precision}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``. .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``__. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for MI300X Series GPUs, see `AMD Instinct MI300X system optimization `__. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - To learn how to run community models from Hugging Face on AMD GPUs, see :doc:`Running models from Hugging Face `. - To learn how to fine-tune LLMs and optimize inference, see :doc:`Fine-tuning LLMs and inference optimization `. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`previous-versions/sglang-history` to find documentation for previous releases of SGLang inference performance testing. --- .. meta:: :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image. :keywords: model, MAD, automation, dashboarding, validate ********************************** vLLM inference performance testing ********************************** .. _vllm-benchmark-unified-docker-1210: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/vllm-benchmark-models.yaml {% set docker = data.dockers[0] %} The `ROCm vLLM Docker <{{ docker.docker_hub_url }}>`_ image offers a prebuilt, optimized environment for validating large language model (LLM) inference performance on AMD Instinct™ MI355X, MI350X, MI325X and MI300X GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for AMD data center GPUs and includes the following components: .. tab-set:: .. tab-item:: {{ docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} With this Docker image, you can quickly test the :ref:`expected inference performance numbers ` for AMD Instinct GPUs. 
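To confirm that the components listed above match what is actually inside the image you pulled, you can run an optional quick check like the following sketch -- it assumes the image provides ``pip`` on its default path:

.. code-block:: shell

   # List the bundled Python packages and compare the versions against
   # the software component table above.
   docker run --rm {{ docker.pull_tag }} pip list | grep -Ei "vllm|torch|triton"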
What's new ========== The following is a summary of notable changes since the :doc:`previous ROCm/vLLM Docker release `. - Improved performance on Llama 3 MXFP4 through AITER optimizations and improved kernel fusion. .. _vllm-benchmark-supported-models-1210: Supported models ================ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/vllm-benchmark-models.yaml {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} .. _vllm-benchmark-available-models-1210: The following models are supported for inference performance benchmarking with vLLM and ROCm. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. MXFP4 models are only supported on MI355X and MI350X GPUs. .. raw:: html
      <!-- Model / Variant selector buttons generated for each supported model group -->
.. _vllm-benchmark-vllm-1210: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} {% if model.precision == "float4" %} .. important:: MXFP4 is supported only on MI355X and MI350X GPUs. {% endif %} {% if model.mad_tag in ["pyt_vllm_mixtral-8x7b", "pyt_vllm_mixtral-8x7b_fp8", "pyt_vllm_mixtral-8x22b", "pyt_vllm_mixtral-8x22b_fp8", "pyt_vllm_deepseek-r1"] %} .. caution:: There is a known regression with AITER for MoE models such as Mixtral and DeepSeek-R1. Consider using the :doc:`previous release ` ``rocm/vllm:rocm7.0.0_vllm_0.11.1_20251103`` for better performance. {% endif %} .. note:: See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model. Some models require access authorization prior to use via an external license agreement through a third party. {% if model.precision == "float8" and model.model_repo.startswith("amd") %} This model uses FP8 quantization via `AMD Quark `__ for efficient inference on AMD GPUs. {% endif %} {% if model.precision == "float4" and model.model_repo.startswith("amd") %} This model uses FP4 quantization via `AMD Quark `__ for efficient inference on AMD GPUs. {% endif %} {% endfor %} {% endfor %} .. _vllm-benchmark-performance-measurements-1210: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and serving measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `_ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/vllm-benchmark-models.yaml {% set docker = data.dockers[0] %} Download the `ROCm vLLM Docker image <{{ docker.docker_hub_url }}>`_. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} Benchmarking ============ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/vllm-benchmark-models.yaml {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} Once the setup is complete, choose between two options to reproduce the benchmark results: .. _vllm-benchmark-mad-1210: {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: .. tab-item:: MAD-integrated benchmarking The following run command is tailored to {{ model.model }}. See :ref:`vllm-benchmark-supported-models-1210` to switch to another available model. 1. 
Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. On the host machine, use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one node with the :literal:`{{model.precision}}` data type. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. Although the :ref:`available models ` are preconfigured to collect offline throughput and online serving performance data, you can also change the benchmarking parameters. See the standalone benchmarking tab for more information. {% if model.tunableop %} .. note:: For improved performance, consider enabling :ref:`PyTorch TunableOp `. TunableOp automatically explores different implementations and configurations of certain PyTorch operators to find the fastest one for your hardware. By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see ``__). To enable it, include the ``--tunableop on`` argument in your run. Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run. {% endif %} .. tab-item:: Standalone benchmarking The following commands are optimized for {{ model.model }}. See :ref:`vllm-benchmark-supported-models-1210` to switch to another available model. .. seealso:: For more information on configuration, see the `config files `__ in the MAD repository. Refer to the `vLLM engine `__ for descriptions of available configuration options and `Benchmarking vLLM `__ for additional benchmarking information. .. rubric:: Launch the container You can run the vLLM benchmark tool independently by starting the `Docker container <{{ docker.docker_hub_url }}>`_ as shown in the following snippet. .. code-block:: shell docker pull {{ docker.pull_tag }} docker run -it \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --shm-size 16G \ --security-opt seccomp=unconfined \ --security-opt apparmor=unconfined \ --cap-add=SYS_PTRACE \ -v $(pwd):/workspace \ --env HUGGINGFACE_HUB_CACHE=/workspace \ --name test \ {{ docker.pull_tag }} .. rubric:: Throughput command Use the following command to start the throughput benchmark. .. 
code-block:: shell model={{ model.model_repo }} tp={{ model.config.tp }} num_prompts={{ model.config.num_prompts | default(1024) }} in={{ model.config.in | default(128) }} out={{ model.config.in | default(128) }} dtype={{ model.config.dtype | default("auto") }} kv_cache_dtype={{ model.config.kv_cache_dtype }} max_num_seqs={{ model.config.max_num_seqs | default(1024) }} max_num_batched_tokens={{ model.config.max_num_batched_tokens }} max_model_len={{ model.config.max_model_len }} vllm bench throughput --model $model \ -tp $tp \ --num-prompts $num_prompts \ --input-len $in \ --output-len $out \ --dtype $dtype \ --kv-cache-dtype $kv_cache_dtype \ --max-num-seqs $max_num_seqs \ --max-num-batched-tokens $max_num_batched_tokens \ --max-model-len $max_model_len \ --trust-remote-code \ --output-json ${model}_throughput.json \ --gpu-memory-utilization {{ model.config.gpu_memory_utilization | default(0.9) }} .. rubric:: Serving command 1. Start the server using the following command: .. code-block:: shell model={{ model.model_repo }} tp={{ model.config.tp }} dtype={{ model.config.dtype }} kv_cache_dtype={{ model.config.kv_cache_dtype }} max_num_seqs=256 max_num_batched_tokens={{ model.config.max_num_batched_tokens }} max_model_len={{ model.config.max_model_len }} vllm serve $model \ -tp $tp \ --dtype $dtype \ --kv-cache-dtype $kv_cache_dtype \ --max-num-seqs $max_num_seqs \ --max-num-batched-tokens $max_num_batched_tokens \ --max-model-len $max_model_len \ --no-enable-prefix-caching \ --swap-space 16 \ --disable-log-requests \ --trust-remote-code \ --gpu-memory-utilization 0.9 Wait until the model has loaded and the server is ready to accept requests. 2. On another terminal on the same machine, run the benchmark: .. code-block:: shell # Connect to the container docker exec -it test bash # Wait for the server to start until curl -s http://localhost:8000/v1/models; do sleep 30; done # Run the benchmark model={{ model.model_repo }} max_concurrency=1 num_prompts=10 in=128 out=128 vllm bench serve --model $model \ --percentile-metrics "ttft,tpot,itl,e2el" \ --dataset-name random \ --ignore-eos \ --max-concurrency $max_concurrency \ --num-prompts $num_prompts \ --random-input-len $in \ --random-output-len $out \ --trust-remote-code \ --save-result \ --result-filename ${model}_serving.json .. note:: For improved performance with certain Mixture of Experts models, such as Mixtral 8x22B, try adding ``export VLLM_ROCM_USE_AITER=1`` to your commands. If you encounter the following error, pass your access-authorized Hugging Face token to the gated models. .. code-block:: OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. raw:: html .. note:: Throughput is calculated as: - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time {% endfor %} {% endfor %} Advanced usage ============== For information on experimental features and known issues related to ROCm optimization efforts on vLLM, see the developer's guide at ``__. .. note:: If you’re using this Docker image on other AMD GPUs such as the AMD Instinct MI200 Series or Radeon, add ``export VLLM_ROCM_USE_AITER=0`` to your command, since AITER is only supported on gfx942 and gfx950 architectures. Reproducing the Docker image ---------------------------- To reproduce this ROCm-enabled vLLM Docker image release, follow these steps: 1. 
Clone the `vLLM repository `__. .. code-block:: shell git clone https://github.com/vllm-project/vllm.git cd vllm 2. Use the following command to build the image directly from the specified commit. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/vllm-benchmark-models.yaml {% set docker = data.dockers[0] %} .. code-block:: shell docker build -f docker/Dockerfile.rocm \ --build-arg REMOTE_VLLM=1 \ --build-arg VLLM_REPO=https://github.com/ROCm/vllm \ --build-arg VLLM_BRANCH="{{ docker.dockerfile.commit }}" \ -t vllm-rocm . .. tip:: Replace ``vllm-rocm`` with your desired image tag. Known issues ============ There is a known regression with AITER for MoE models such as Mixtral and DeepSeek-R1. Consider using the :doc:`previous release ` (``rocm/vllm:rocm7.0.0_vllm_0.11.1_20251103``) for better performance. Further reading =============== - To learn more about the options for latency and throughput benchmark scripts, see ``_. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for a brief introduction to vLLM and optimization strategies. - For application performance optimization strategies for HPC and AI workloads, including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`previous-versions/vllm-history` to find documentation for previous releases of the ``ROCm/vllm`` Docker image. --- .. meta:: :description: How to deploy your model for AI inference using vLLM and Hugging Face TGI. :keywords: ROCm, AI, LLM, train, fine-tune, deploy, FSDP, DeepSpeed, LLaMA, tutorial ******************** Deploying your model ******************** ROCm enables inference and deployment for various classes of models including CNN, RNN, LSTM, MLP, and transformers. This section focuses on deploying transformers-based LLM models. ROCm supports vLLM and Hugging Face TGI as major LLM-serving frameworks. .. _rocm-for-ai-serve-vllm: Serving using vLLM ================== vLLM is a fast and easy-to-use library for LLM inference and serving. AMD is actively working with the vLLM team to improve performance and support the latest ROCm versions. See the `GitHub repository `_ and `official vLLM documentation `_ for more information. For guidance on using vLLM with ROCm, refer to `Installation with ROCm `__. vLLM installation ----------------- vLLM supports two ROCm-capable installation methods. Refer to the official documentation use the following links. - `Build from source with Docker `_ (recommended) - `Build from source `_ vLLM walkthrough ---------------- Refer to this developer blog for guidance on serving with vLLM `Inferencing and serving with vLLM on AMD GPUs — ROCm Blogs `_ Validating vLLM performance --------------------------- ROCm provides a prebuilt optimized Docker image for validating the performance of LLM inference with vLLM on the MI300X GPU. The Docker image includes ROCm, vLLM, PyTorch, and tuning files in the CSV format. For more information, see the guide to `LLM inference performance testing with vLLM on the AMD Instinct™ MI300X GPU `_ on the ROCm GitHub repository. .. 
_rocm-for-ai-serve-hugging-face-tgi: Serving using Hugging Face TGI ============================== The `Hugging Face Text Generation Inference `_ (TGI) library is optimized for serving LLMs with low latency. Refer to the `Quick tour of TGI `_ for more details. TGI installation ---------------- The easiest way to use Hugging Face TGI with ROCm on AMD Instinct GPUs is to use the official Docker image at ``__. TGI walkthrough --------------- #. Set up the LLM server. Deploy the Llama2 7B model with TGI using the official Docker image. .. code-block:: shell model=TheBloke/Llama-2-7B-fp16 volume=$PWD docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/kfd --device=/dev/dri --group-add video --ipc=host --shm-size 1g -p 8080:80 -v $volume:/data --name tgi_amd ghcr.io/huggingface/text-generation-inference:1.2-rocm --model-id $model #. Set up the client. a. Open another shell session and run the following command to access the server with the client URL. .. code-block:: shell curl 127.0.0.1:8080/generate \ -X POST \ -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \ -H 'Content-Type: application/json' b. Access the server using the Python ``requests`` package. .. code-block:: shell pip install requests PYTHONPATH=/usr/lib/python3/dist-packages python requests_model.py ``requests_model.py`` should look like: .. code-block:: python import requests headers = { "Content-Type": "application/json", } data = { 'inputs': 'What is Deep Learning?', 'parameters': { 'max_new_tokens': 20 }, } response = requests.post('http://127.0.0.1:8080/generate', headers=headers, json=data) print(response.json()) vLLM and Hugging Face TGI are robust solutions for anyone looking to deploy LLMs for applications that demand high performance, low latency, and scalability. Visit the topics in :doc:`Using ROCm for AI <../index>` to learn about other ROCm-aware solutions for AI development. --- .. meta:: :description: How to run models from Hugging Face on AMD GPUs. :keywords: ROCm, AI, LLM, Hugging Face, Optimum, Flash Attention, GPTQ, ONNX, tutorial ******************************** Running models from Hugging Face ******************************** `Hugging Face `_ hosts the world’s largest AI model repository for developers to obtain transformer models. Hugging Face models and tools significantly enhance productivity, performance, and accessibility in developing and deploying AI solutions. This section describes how to run popular community transformer models from Hugging Face on AMD GPUs. .. _rocm-for-ai-hugging-face-transformers: Using Hugging Face Transformers ------------------------------- First, `install the Hugging Face Transformers library `_, which lets you easily import any of the transformer models into your Python application. .. code-block:: shell pip install transformers Here is an example of running `GPT2 `_: .. code-block:: python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') text = "Replace me with any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) Mainstream transformer models are regularly tested on supported hardware platforms. Models derived from those core models should also function correctly. Here are some mainstream models to get you started: - `BERT `_ - `BLOOM `_ - `Llama `_ - `OPT `_ - `T5 `_ ..
_rocm-for-ai-hugging-face-optimum: Using Hugging Face with Optimum-AMD ----------------------------------- Optimum-AMD is the interface between Hugging Face libraries and the ROCm software stack. For a deeper dive into using Hugging Face libraries on AMD GPUs, refer to the `Optimum-AMD `_ page on Hugging Face for guidance on using Flash Attention 2, GPTQ quantization and the ONNX Runtime integration. Hugging Face libraries natively support AMD Instinct GPUs. For other :doc:`ROCm-capable hardware `, support is currently not validated, but most features are expected to work without issues. .. _rocm-for-ai-install-optimum-amd: Installation ~~~~~~~~~~~~ Install Optimum-AMD using pip. .. code-block:: shell pip install --upgrade --upgrade-strategy eager optimum[amd] Or, install from source. .. code-block:: shell git clone https://github.com/huggingface/optimum-amd.git cd optimum-amd pip install -e . .. _rocm-for-ai-flash-attention: Flash Attention --------------- #. Use `the Hugging Face team's example Dockerfile `_ to use Flash Attention with ROCm. .. code-block:: shell docker build -f Dockerfile -t transformers_pytorch_amd_gpu_flash . volume=$PWD docker run -it --network=host --device=/dev/kfd --device=/dev/dri --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v $volume:/workspace --name transformer_amd transformers_pytorch_amd_gpu_flash:latest #. Use Flash Attention 2 with `Transformers `_ by adding the ``use_flash_attention_2`` parameter to ``from_pretrained()``: .. code-block:: python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b") with torch.device("cuda"): model = AutoModelForCausalLM.from_pretrained( "tiiuae/falcon-7b", torch_dtype=torch.float16, use_flash_attention_2=True, ) .. _rocm-for-ai-gptq: GPTQ ---- To enable `GPTQ `_, hosted wheels are available for ROCm. #. First, :ref:`install Optimum-AMD `. #. Install AutoGPTQ using pip. Refer to `AutoGPTQ Installation `_ for in-depth guidance. .. code-block:: shell pip install auto-gptq --no-build-isolation --extra-index-url https://huggingface.github.io/autogptq-index/whl/rocm573/ Or, to install from source for AMD GPUs supporting ROCm, specify the ``ROCM_VERSION`` environment variable. .. code-block:: shell ROCM_VERSION=6.1 pip install -vvv --no-build-isolation -e . #. Load GPTQ-quantized models in Transformers using the backend `AutoGPTQ library `_: .. code-block:: python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-7B-Chat-GPTQ") with torch.device("cuda"): model = AutoModelForCausalLM.from_pretrained( "TheBloke/Llama-2-7B-Chat-GPTQ", torch_dtype=torch.float16, ) .. _rocm-for-ai-onnx: ONNX ---- Hugging Face Optimum also supports the `ONNX Runtime `_ integration. For ONNX models, usage is straightforward. #. Specify the provider argument in the ``ORTModel.from_pretrained()`` method: .. code-block:: python from optimum.onnxruntime import ORTModelForSequenceClassification .. ort_model = ORTModelForSequenceClassification.from_pretrained( .. provider="ROCMExecutionProvider" ) #. Try running a `BERT text classification `_ ONNX model with ROCm: .. 
code-block:: python from optimum.onnxruntime import ORTModelForSequenceClassification from optimum.pipelines import pipeline from transformers import AutoTokenizer import onnxruntime as ort session_options = ort.SessionOptions() session_options.log_severity_level = 0 ort_model = ORTModelForSequenceClassification.from_pretrained( "distilbert-base-uncased-finetuned-sst-2-english", export=True, provider="ROCMExecutionProvider", session_options=session_options ) tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english") pipe = pipeline(task="text-classification", model=ort_model, tokenizer=tokenizer, device="cuda:0") result = pipe("Both the music and visual were astounding, not to mention the actors performance.") --- .. meta:: :description: How to use ROCm for AI inference workloads. :keywords: ROCm, AI, machine learning, LLM, AI inference, NLP, GPUs, usage, tutorial **************************** Use ROCm for AI inference **************************** AI inference is a process of deploying a trained machine learning model to make predictions or classifications on new data. This commonly involves using the model with real-time data and making quick decisions based on the predictions made by the model.  Understanding the ROCm™ software platform’s architecture and capabilities is vital for running AI inference. By leveraging the ROCm platform's capabilities, you can harness the power of high-performance computing and efficient resource management to run inference workloads, leading to faster predictions and classifications on real-time data. Throughout the following topics, this section provides a comprehensive guide to setting up and deploying AI inference on AMD GPUs. This includes instructions on how to install ROCm, how to use Hugging Face Transformers to manage pre-trained models for natural language processing (NLP) tasks, how to validate vLLM on AMD Instinct™ MI300X GPUs and illustrate how to deploy trained models in production environments. The AI Developer Hub contains `AMD ROCm tutorials `_ for training, fine-tuning, and inference. It leverages popular machine learning frameworks on AMD GPUs. - :doc:`Installing ROCm and machine learning frameworks <../install>` - :doc:`Running models from Hugging Face ` - :doc:`LLM inference frameworks ` - :doc:`vLLM inference performance testing ` - :doc:`PyTorch inference performance testing ` - :doc:`SGLang inference performance testing ` - :doc:`xDiT diffusion inference ` - :doc:`Deploying your model ` --- .. meta:: :description: How to implement the LLM inference frameworks with ROCm acceleration. :keywords: ROCm, LLM, fine-tuning, usage, tutorial, inference, vLLM, TGI, text generation inference ************************ LLM inference frameworks ************************ This section discusses how to implement `vLLM `_ and `Hugging Face TGI `_ using :doc:`single-accelerator <../fine-tuning/single-gpu-fine-tuning-and-inference>` and :doc:`multi-accelerator <../fine-tuning/multi-gpu-fine-tuning-and-inference>` systems. .. _fine-tuning-llms-vllm: vLLM inference ============== vLLM is renowned for its PagedAttention algorithm that can reduce memory consumption and increase throughput thanks to its paging scheme. Instead of allocating GPU high-bandwidth memory (HBM) for the maximum output token lengths of the models, the paged attention of vLLM allocates GPU HBM dynamically for its actual decoding lengths. 
This paged attention is also effective when multiple requests share the same key and value contents, for example with large beam search widths or many parallel requests. vLLM also incorporates many modern LLM acceleration and quantization algorithms, such as Flash Attention, HIP and CUDA graphs, tensor parallel multi-GPU, GPTQ, AWQ, and token speculation. Installing vLLM --------------- .. _fine-tuning-llms-vllm-rocm-docker-image: 1. Run the following commands to build a Docker image ``vllm-rocm``. .. code-block:: shell git clone https://github.com/vllm-project/vllm.git cd vllm docker build -f docker/Dockerfile.rocm -t vllm-rocm . .. tab-set:: .. tab-item:: vLLM on a single-accelerator system :sync: single 2. To use vLLM as an API server to serve inference requests, first start a container using the :ref:`vllm-rocm Docker image `. .. code-block:: shell docker run -it \ --network=host \ --group-add=video \ --ipc=host \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --device /dev/kfd \ --device /dev/dri \ -v :/app/model \ vllm-rocm \ bash 3. Inside the container, start the API server to run on a single GPU on port 8000 using the following command. .. code-block:: shell python -m vllm.entrypoints.api_server --model /app/model --dtype float16 --port 8000 & The following log message displayed in your command line indicates that the server is listening for requests. .. image:: ../../../data/how-to/llm-fine-tuning-optimization/vllm-single-gpu-log.png :alt: vLLM API server log message :align: center 4. To test, send it a curl request containing a prompt. .. code-block:: shell curl http://localhost:8000/generate -H "Content-Type: application/json" -d '{"prompt": "What is AMD Instinct?", "max_tokens": 80, "temperature": 0.0 }' You should receive a response like the following. .. code-block:: text {"text":["What is AMD Instinct?\nAmd Instinct is a brand new line of high-performance computing (HPC) processors from Advanced Micro Devices (AMD). These processors are designed to deliver unparalleled performance for HPC workloads, including scientific simulations, data analytics, and machine learning.\nThe Instinct lineup includes a range of processors, from the entry-level Inst"]} .. tab-item:: vLLM on a multi-accelerator system :sync: multi 2. To use vLLM as an API server to serve inference requests, first start a container using the :ref:`vllm-rocm Docker image `. .. code-block:: shell docker run -it \ --network=host \ --group-add=video \ --ipc=host \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --device /dev/kfd \ --device /dev/dri \ -v :/app/model \ vllm-rocm \ bash 3. To run the API server on multiple GPUs, use the ``-tp`` or ``--tensor-parallel-size`` parameter. For example, to use two GPUs, start the API server using the following command. .. code-block:: shell python -m vllm.entrypoints.api_server --model /app/model --dtype float16 -tp 2 --port 8000 & 4. To run multiple API server instances, specify a different port for each server, and use ``ROCR_VISIBLE_DEVICES`` to isolate each instance to a different set of GPUs. For example, to run two API servers, one on port 8000 using GPUs 0 and 1 and one on port 8001 using GPUs 2 and 3, use commands like the following. .. code-block:: shell ROCR_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.api_server --model /data/llama-2-7b-chat-hf --dtype float16 -tp 2 --port 8000 & ROCR_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.api_server --model /data/llama-2-7b-chat-hf --dtype float16 -tp 2 --port 8001 & 5.
To test, send it a curl request containing a prompt. .. code-block:: shell curl http://localhost:8000/generate -H "Content-Type: application/json" -d '{"prompt": "What is AMD Instinct?", "max_tokens": 80, "temperature": 0.0 }' You should receive a response like the following. .. code-block:: text {"text":["What is AMD Instinct?\nAmd Instinct is a brand new line of high-performance computing (HPC) processors from Advanced Micro Devices (AMD). These processors are designed to deliver unparalleled performance for HPC workloads, including scientific simulations, data analytics, and machine learning.\nThe Instinct lineup includes a range of processors, from the entry-level Inst"]} .. seealso:: See :ref:`mi300x-vllm-optimization` for performance optimization tips. ROCm provides a prebuilt optimized Docker image for validating the performance of LLM inference with vLLM on the MI300X GPU. The Docker image includes ROCm, vLLM, and PyTorch. For more information, see :doc:`/how-to/rocm-for-ai/inference/benchmark-docker/vllm`. .. _fine-tuning-llms-tgi: Hugging Face TGI ================ Text Generation Inference (TGI) is LLM serving framework from Hugging Face, and it also supports the majority of high-performance LLM acceleration algorithms such as Flash Attention, Paged Attention, CUDA/HIP graph, tensor parallel multi-GPU, GPTQ, AWQ, and token speculation. .. tip:: In addition to LLM serving capability, TGI also provides the `Text Generation Inference benchmarking tool `_. Install TGI ----------- 1. Launch the TGI Docker container in the host machine. .. code-block:: shell docker run --name tgi --rm -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/kfd --device=/dev/dri --group-add video --ipc=host --shm-size 256g --net host -v $PWD:/data --entrypoint "/bin/bash" --env HUGGINGFACE_HUB_CACHE=/data ghcr.io/huggingface/text-generation-inference:latest-rocm .. tab-set:: .. tab-item:: TGI on a single-accelerator system :sync: single 2. Inside the container, launch a model using TGI server on a single GPU. .. code-block:: shell export ROCM_USE_FLASH_ATTN_V2_TRITON=True text-generation-launcher --model-id NousResearch/Meta-Llama-3-70B --dtype float16 --port 8000 & 3. To test, send it a curl request containing a prompt. .. code-block:: shell curl http://localhost:8000/generate_stream -X POST -d '{"inputs":"What is AMD Instinct?","parameters":{"max_new_tokens":20}}' -H 'Content-Type: application/json' You should receive a response like the following. .. code-block:: shell data:{"index":20,"token":{"id":304,"text":" in","logprob":-1.2822266,"special":false},"generated_text":" AMD Instinct is a new family of data center GPUs designed to accelerate the most demanding workloads in","details":null} .. tab-item:: TGI on a multi-accelerator system 2. Inside the container, launch a model using TGI server on multiple GPUs (four in this case). .. code-block:: shell export ROCM_USE_FLASH_ATTN_V2_TRITON=True text-generation-launcher --model-id NousResearch/Meta-Llama-3-8B --dtype float16 --port 8000 --num-shard 4 & 3. To test, send it a curl request containing a prompt. .. code-block:: shell curl http://localhost:8000/generate_stream -X POST -d '{"inputs":"What is AMD Instinct?","parameters":{"max_new_tokens":20}}' -H 'Content-Type: application/json' You should receive a response like the following. .. 
code-block:: shell data:{"index":20,"token":{"id":304,"text":" in","logprob":-1.2773438,"special":false},"generated_text":" AMD Instinct is a new family of data center GPUs designed to accelerate the most demanding workloads in","details":null} --- .. meta:: :description: Learn to validate diffusion model video generation on MI300X, MI350X and MI355X accelerators using prebuilt and optimized docker images. :keywords: xDiT, diffusion, video, video generation, image, image generation, validate, benchmark ************************ xDiT diffusion inference ************************ .. _xdit-video-diffusion: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/xdit-inference-models.yaml {% set docker = data.docker %} The `rocm/pytorch-xdit <{{ docker.docker_hub_url }}>`_ Docker image offers a prebuilt, optimized environment based on `xDiT `_ for benchmarking diffusion model video and image generation on AMD Instinct MI355X, MI350X (gfx950), MI325X, and MI300X (gfx942) GPUs. The image runs a preview version of ROCm using the new `TheRock `__ build system and includes the following components: .. dropdown:: Software components - {{ docker.pull_tag.split('-')|last }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_data in docker.components.items() %} * - `{{ component_name }} <{{ component_data.url }}>`_ - {{ component_data.version }} {% endfor %} Follow this guide to pull the required image, spin up a container, download the model, and run a benchmark. For preview and development releases, see `amdsiloai/pytorch-xdit `_. What's new ========== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/xdit-inference-models.yaml {% set docker = data.docker %} {% for item in docker.whats_new %} * {{ item }} {% endfor %} .. _xdit-video-diffusion-supported-models: Supported models ================ The following models are supported for inference performance benchmarking. Some instructions, commands, and recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/xdit-inference-models.yaml {% set docker = data.docker %} .. raw:: html
<!-- Model group and variant selector buttons (generated from docker.supported_models) -->
{% for model_group in docker.supported_models %} {% for model in model_group.models %} .. container:: model-doc {{ model.js_tag }} .. note:: To learn more about your specific model see the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ or visit the `GitHub page <{{ model.github }}>`__. Note that some models require access authorization before use via an external license agreement through a third party. {% endfor %} {% endfor %} Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and serving measurements for inferencing popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the latest version of this inference benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/xdit-inference-models.yaml {% set docker = data.docker %} For this tutorial, it's recommended to use the latest ``{{ docker.pull_tag }}`` Docker image. Pull the image using the following command: .. code-block:: shell docker pull {{ docker.pull_tag }} Validate and benchmark ====================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/xdit-inference-models.yaml {% set docker = data.docker %} Once the image has been downloaded you can follow these steps to run benchmarks and generate outputs. {% for model_group in docker.supported_models %} {% for model in model_group.models %} .. container:: model-doc {{model.js_tag}} The following commands are written for {{ model.model }}. See :ref:`xdit-video-diffusion-supported-models` to switch to another available model. {% endfor %} {% endfor %} Choose your setup method ------------------------ You can either use an existing Hugging Face cache or download the model fresh inside the container. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/xdit-inference-models.yaml {% set docker = data.docker %} {% for model_group in docker.supported_models %} {% for model in model_group.models %} .. container:: model-doc {{model.js_tag}} .. tab-set:: .. tab-item:: Option 1: Use existing Hugging Face cache If you already have models downloaded on your host system, you can mount your existing cache. 1. Set your Hugging Face cache location. .. code-block:: shell export HF_HOME=/your/hf_cache/location 2. Download the model (if not already cached). .. code-block:: shell huggingface-cli download {{ model.model_repo }} {% if model.revision %} --revision {{ model.revision }} {% endif %} 3. Launch the container with mounted cache. .. 
code-block:: shell docker run \ -it --rm \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --user root \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --ipc=host \ --network host \ --privileged \ --shm-size 128G \ --name pytorch-xdit \ -e HSA_NO_SCRATCH_RECLAIM=1 \ -e OMP_NUM_THREADS=16 \ -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ -e HF_HOME=/app/huggingface_models \ -v $HF_HOME:/app/huggingface_models \ {{ docker.pull_tag }} .. tab-item:: Option 2: Download inside container If you prefer to keep the container self-contained or don't have an existing cache. 1. Launch the container .. code-block:: shell docker run \ -it --rm \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --user root \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ --ipc=host \ --network host \ --privileged \ --shm-size 128G \ --name pytorch-xdit \ -e HSA_NO_SCRATCH_RECLAIM=1 \ -e OMP_NUM_THREADS=16 \ -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ {{ docker.pull_tag }} 2. Inside the container, set the Hugging Face cache location and download the model. .. code-block:: shell export HF_HOME=/app/huggingface_models huggingface-cli download {{ model.model_repo }} {% if model.revision %} --revision {{ model.revision }} {% endif %} .. warning:: Models will be downloaded to the container's filesystem and will be lost when the container is removed unless you persist the data with a volume. {% endfor %} {% endfor %} Run inference ============= .. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/xdit-inference-models.yaml {% set docker = data.docker %} {% for model_group in docker.supported_models %} {% for model in model_group.models %} .. container:: model-doc {{ model.js_tag }} .. tab-set:: .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. On the host machine, use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model using one node. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv`` and ``{{ model.mad_tag }}_serving.csv``. .. tab-item:: Standalone benchmarking To run the benchmarks for {{ model.model }}, use the following command: .. code-block:: shell {% if model.model == "Hunyuan Video" %} cd /app/Hunyuanvideo mkdir results torchrun --nproc_per_node=8 run.py \ --model {{ model.model_repo }} \ --prompt "In the large cage, two puppies were wagging their tails at each other." \ --height 720 --width 1280 --num_frames 129 \ --num_inference_steps 50 --warmup_steps 1 --n_repeats 1 \ --ulysses_degree 8 \ --enable_tiling --enable_slicing \ --use_torch_compile \ --bench_output results {% endif %} {% if model.model == "Wan2.1" %} cd /app/Wan mkdir results torchrun --nproc_per_node=8 /app/Wan/run.py \ --task i2v \ --height 720 \ --width 1280 \ --model {{ model.model_repo }} \ --img_file_path /app/Wan/i2v_input.JPG \ --ulysses_degree 8 \ --seed 42 \ --num_frames 81 \ --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. 
The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \ --num_repetitions 1 \ --num_inference_steps 40 \ --use_torch_compile {% endif %} {% if model.model == "Wan2.2" %} cd /app/Wan mkdir results torchrun --nproc_per_node=8 /app/Wan/run.py \ --task i2v \ --height 720 \ --width 1280 \ --model {{ model.model_repo }} \ --img_file_path /app/Wan/i2v_input.JPG \ --ulysses_degree 8 \ --seed 42 \ --num_frames 81 \ --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \ --num_repetitions 1 \ --num_inference_steps 40 \ --use_torch_compile {% endif %} {% if model.model == "FLUX.1" %} cd /app/Flux mkdir results torchrun --nproc_per_node=8 /app/Flux/run.py \ --model {{ model.model_repo }} \ --seed 42 \ --prompt "A small cat" \ --height 1024 \ --width 1024 \ --num_inference_steps 25 \ --max_sequence_length 256 \ --warmup_steps 5 \ --no_use_resolution_binning \ --ulysses_degree 8 \ --use_torch_compile \ --num_repetitions 50 {% endif %} {% if model.model == "FLUX.1 Kontext" %} cd /app/Flux mkdir results torchrun --nproc_per_node=8 /app/Flux/run_usp.py \ --model {{ model.model_repo }} \ --seed 42 \ --prompt "Add a cool hat to the cat" \ --height 1024 \ --width 1024 \ --num_inference_steps 30 \ --max_sequence_length 512 \ --warmup_steps 5 \ --no_use_resolution_binning \ --ulysses_degree 8 \ --use_torch_compile \ --img_file_path /app/Flux/cat.png \ --model_type flux_kontext \ --guidance_scale 2.5 \ --num_repetitions 25 {% endif %} {% if model.model == "FLUX.2" %} cd /app/Flux mkdir results torchrun --nproc_per_node=8 /app/Flux/run_usp.py \ --model {{ model.model_repo }} \ --seed 42 \ --prompt "Add a cool hat to the cat" \ --height 1024 \ --width 1024 \ --num_inference_steps 50 \ --max_sequence_length 512 \ --warmup_steps 5 \ --no_use_resolution_binning \ --ulysses_degree 8 \ --use_torch_compile \ --img_file_paths /app/Flux/cat.png \ --model_type flux2 \ --guidance_scale 4.0 \ --num_repetitions 25 {% endif %} {% if model.model == "stable-diffusion-3.5-large" %} cd /app/StableDiffusion3.5 mkdir results torchrun --nproc_per_node=8 /app/StableDiffusion3.5/run.py \ --model {{ model.model_repo }} \ --num_inference_steps 28 \ --prompt "A capybara holding a sign that reads Hello World" \ --use_torch_compile \ --pipefusion_parallel_degree 4 \ --use_cfg_parallel \ --num_repetitions 50 \ --dtype torch.float16 \ --output_path results {% endif %} The generated video will be stored under the results directory. 
For the actual benchmark step runtimes, see {% if model.model == "Hunyuan Video" %}stdout.{% elif model.model in ["Wan2.1", "Wan2.2"] %}results/outputs/rank0_*.json{% elif model.model in ["FLUX.1", "FLUX.1 Kontext", "FLUX.2"] %}results/timing.json{% elif model.model == "stable-diffusion-3.5-large"%}benchmark_results.csv{% endif %} {% if model.model == "FLUX.1" %}You may also use ``run_usp.py`` which implements USP without modifying the default diffusers pipeline. {% endif %} {% endfor %} {% endfor %} Previous versions ================= See :doc:`benchmark-docker/previous-versions/xdit-history` to find documentation for previous releases of xDiT diffusion inference performance testing. --- .. meta:: :description: How to Use ROCm for AI inference optimization :keywords: ROCm, LLM, AI inference, Optimization, GPUs, usage, tutorial ******************************************* Use ROCm for AI inference optimization ******************************************* AI inference optimization is the process of improving the performance of machine learning models and speeding up the inference process. It includes: - **Quantization**: This involves reducing the precision of model weights and activations while maintaining acceptable accuracy levels. Reduced precision improves inference efficiency because lower precision data requires less storage and better utilizes the hardware's computation power. - **Kernel optimization**: This technique involves optimizing computation kernels to exploit the underlying hardware capabilities. For example, the kernels can be optimized to use multiple GPU cores or utilize specialized hardware like tensor cores to accelerate the computations. - **Libraries**: Libraries such as Flash Attention, xFormers, and PyTorch TunableOp are used to accelerate deep learning models and improve the performance of inference workloads. - **Hardware acceleration**: Hardware acceleration techniques, like GPUs for AI inference, can significantly improve performance due to their parallel processing capabilities. - **Pruning**: This involves removing unnecessary connections, layers, or weights from a pre-trained model while maintaining acceptable accuracy levels, resulting in a smaller model that requires fewer computational resources to run inference. Utilizing these optimization techniques with the ROCm™ software platform can significantly reduce inference time, improve performance, and reduce the cost of your AI applications. Throughout the following topics, this guide discusses optimization techniques for inference workloads. - :doc:`Model quantization ` - :doc:`Model acceleration libraries ` - :doc:`Optimizing with Composable Kernel ` - :doc:`Optimizing Triton kernels ` - :doc:`Profiling and debugging ` - :doc:`Workload tuning ` --- .. meta:: :description: How to use model acceleration techniques and libraries to improve memory efficiency and performance. :keywords: ROCm, LLM, fine-tuning, usage, tutorial, Flash Attention, Hugging Face, xFormers, vLLM, PyTorch **************************** Model acceleration libraries **************************** This section discusses model acceleration techniques and libraries to improve memory efficiency and performance. .. _acceleration-flash-attention: Flash Attention 2 ================= Flash Attention is a technique designed to reduce memory movements between GPU SRAM and high-bandwidth memory (HBM). 
By using a tiling approach, Flash Attention 2 improves memory locality in the nested loops of query, key, and value computations within the Attention modules of LLMs. These modules include Multi-Head Attention (MHA), Group-Query Attention (GQA), and Multi-Query Attention (MQA). This reduction in memory movements significantly decreases the time-to-first-token (TTFT) latency for large batch sizes and long prompt sequences, thereby enhancing overall performance. .. image:: ../../../data/how-to/llm-fine-tuning-optimization/attention-module.png :alt: Attention module of a large language model utilizing tiling :align: center Installation prerequisites -------------------------- Before installing Flash Attention 2, ensure the following are available: * ROCm-enabled PyTorch * Triton These can be installed by following the official `PyTorch installation guide `_. Alternatively, for a simpler setup, you can use a preconfigured :ref:`ROCm PyTorch Docker image `, which already includes the required libraries. Installing Flash Attention 2 ---------------------------- `Flash Attention `_ supports two backend implementations on AMD GPUs. * `Composable Kernel (CK) `__ - the default backend * `OpenAI Triton `__ - an alternative backend You can switch between these backends using the environment variable ``FLASH_ATTENTION_TRITON_AMD_ENABLE``: ``FLASH_ATTENTION_TRITON_AMD_ENABLE="FALSE"`` → Use the Composable Kernel (CK) backend (Flash Attention 2) ``FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE"`` → Use the OpenAI Triton backend (Flash Attention 2) To install Flash Attention 2, use the following commands: .. code-block:: shell git clone https://github.com/Dao-AILab/flash-attention.git cd flash-attention/ pip install ninja # To install the CK backend flash attention python setup.py install # To install the Triton backend flash attention FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE" python setup.py install # To install both the CK and Triton backend flash attention FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE" FLASH_ATTENTION_SKIP_CK_BUILD="FALSE" python setup.py install For detailed installation instructions, see `Flash Attention `_. Benchmarking Flash Attention 2 ------------------------------ Benchmark scripts to evaluate the performance of Flash Attention 2 are stored in the ``flash-attention/benchmarks/`` directory. To benchmark the CK backend: .. code-block:: shell cd flash-attention/benchmarks pip install transformers einops ninja python3 benchmark_flash_attention.py To benchmark the Triton backend: .. code-block:: shell FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE" python3 benchmark_flash_attention.py Using Flash Attention 2 ----------------------- ..
code-block:: python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model_name = "NousResearch/Llama-3.2-1B" tokenizer = AutoTokenizer.from_pretrained(model_name, dtype=torch.bfloat16, use_fast=False) inputs = tokenizer('Today is', return_tensors='pt').to(device) model_eager = AutoModelForCausalLM.from_pretrained(model_name, dtype=torch.bfloat16, attn_implementation="eager").cuda(device) model_ckFAv2 = AutoModelForCausalLM.from_pretrained(model_name, dtype=torch.bfloat16, attn_implementation="flash_attention_2").cuda(device) model_eager.generation_config.pad_token_id = model_eager.generation_config.eos_token_id model_ckFAv2.generation_config.pad_token_id = model_ckFAv2.generation_config.eos_token_id print("eager\n GQA: ", tokenizer.decode(model_eager.generate(**inputs, max_new_tokens=22)[0], skip_special_tokens=True, do_sample=False, num_beams=1)) print("ckFAv2\n GQA: ", tokenizer.decode(model_ckFAv2.generate(**inputs, max_new_tokens=22)[0], skip_special_tokens=True, do_sample=False, num_beams=1)) The outputs from eager mode and FlashAttention-2 are identical, although their performance behavior differs. .. code-block:: shell eager GQA: Today is the 10th anniversary of the 9/11 attacks. I remember that day like it was yesterday. ckFAv2 GQA: Today is the 10th anniversary of the 9/11 attacks. I remember that day like it was yesterday. xFormers ======== xFormers also improves the performance of attention modules. Although xFormers attention performs very similarly to Flash Attention 2 due to its tiling behavior of query, key, and value, it’s widely used for LLMs and Stable Diffusion models with the Hugging Face Diffusers library. Installing CK xFormers ---------------------- Use the following commands to install CK xFormers. .. code-block:: shell # Install from source git clone https://github.com/ROCm/xformers.git cd xformers/ git submodule update --init --recursive PYTORCH_ROCM_ARCH=gfx942 python setup.py install #Instinct MI300-series PyTorch built-in acceleration ============================= `PyTorch compilation mode `__ synthesizes the model into a graph and then lowers it to prime operators. These operators are compiled using TorchInductor, which uses OpenAI Triton as a building block for GPU acceleration. One advantage of PyTorch compilation mode is that its GPU kernels are written in Python, making modifying and extending them easier. PyTorch compilation mode often delivers higher performance, as model operations are fused before runtime, which allows for easy deployment of high-performance kernels. PyTorch compilation ------------------- To utilize the PyTorch compilation mode, specific layers of the model must be explicitly assigned as compilation targets. In the case of LLM, where autoregressive token decoding generates dynamically changing key/value sizes, limiting the key/value size to a static dimension, ``max_cache_length``, is necessary to utilize the performance benefits of the PyTorch compilation. .. 
code-block:: python # Sample script to run LLM with the static key-value cache and PyTorch compilation from transformers import AutoModelForCausalLM, AutoTokenizer, StaticCache import torch from typing import Optional import os device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") os.environ["TOKENIZERS_PARALLELISM"] = "false" model_name = "NousResearch/Meta-Llama-3-8B" prompts = [] for b in range(1): prompts.append("New york city is where " ) tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device).eval() inputs = tokenizer(prompts, return_tensors="pt").to(model.device) def decode_one_tokens(model, cur_token, input_pos, cache_position): logits = model(cur_token, position_ids=input_pos, cache_position=cache_position, return_dict=False, use_cache=True)[0] new_token = torch.argmax(logits[:, -1], dim=-1)[:, None] return new_token batch_size, seq_length = inputs["input_ids"].shape # Static key-value cache max_cache_length = 1024 max_new_tokens = 10 model._setup_cache(StaticCache, batch_size, max_cache_len=max_cache_length) cache_position = torch.arange(seq_length, device=device) generated_ids = torch.zeros(batch_size, seq_length + max_new_tokens + 1, dtype=torch.int, device=device) generated_ids[:, cache_position] = inputs["input_ids"].to(device).to(torch.int) logits = model(**inputs, cache_position=cache_position, return_dict=False, use_cache=True)[0] next_token = torch.argmax(logits[:, -1], dim=-1)[:, None] # torch compilation decode_one_tokens = torch.compile(decode_one_tokens, mode="max-autotune-no-cudagraphs",fullgraph=True) generated_ids[:, seq_length] = next_token[:, 0] cache_position = torch.tensor([seq_length + 1], device=device) with torch.no_grad(): for _ in range(1, max_new_tokens): with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_mem_efficient=False, enable_math=True): next_token = decode_one_tokens(model, next_token.clone(), None, cache_position) generated_ids[:, cache_position] = next_token.int() cache_position += 1 .. _fine-tuning-llms-pytorch-tunableop: PyTorch TunableOp ------------------ ROCm PyTorch (2.2.0 and later) allows users to use high-performance ROCm GEMM kernel libraries through PyTorch's built-in TunableOp options. This enables users to automatically pick up the best-performing GEMM kernels from :doc:`rocBLAS ` and :doc:`hipBLASLt ` libraries during runtime. During warm-up runs or offline profiling steps, users can create a GEMM Table that enumerates the kernel information. During the model's run, the best-performing kernel substitutes ``torch.nn.functional.linear(input, weight, bias=None)`` with the kernel specified in the GEMM table. The `Tunable GitHub `_ page describes the options. .. code-block:: python # To turn on TunableOp, simply set this environment variable export PYTORCH_TUNABLEOP_ENABLED=1 # Python import torch import torch.nn as nn import torch.nn.functional as F A = torch.rand(100, 20, device="cuda") W = torch.rand(200, 20, device="cuda") Out = F.linear(A, W) print(Out.size()) # tunableop_results0.csv Validator,PT_VERSION,2.4.0 Validator,ROCM_VERSION,6.1.0.0-82-5fabb4c Validator,HIPBLASLT_VERSION,0.7.0-1549b021 Validator,GCN_ARCH_NAME,gfx942:sramecc+:xnack- Validator,ROCBLAS_VERSION,4.1.0-cefa4a9b-dirty GemmTunableOp_float_TN,tn_200_100_20,Gemm_Rocblas_32323,0.00669595 .. 
image:: ../../../data/how-to/llm-fine-tuning-optimization/tunableop.png :alt: GEMM and TunableOp :align: center Learn more about optimizing kernels with TunableOp in :ref:`Optimizing Triton kernels `. FBGEMM and FBGEMM_GPU ===================== FBGEMM (Facebook General Matrix Multiplication) is a low-precision, high-performance CPU kernel library for matrix-matrix multiplications and convolutions. It is used for server-side inference and as a back end for PyTorch quantized operators. FBGEMM offers optimized on-CPU performance for reduced precision calculations, strong performance on native tensor formats, and the ability to generate high-performance shape- and size-specific kernels at runtime. FBGEMM_GPU collects several high-performance PyTorch GPU operator libraries for use in training and inference. It provides efficient table-batched embedding functionality, data layout transformation, and quantization support. For more information about FBGEMM and FBGEMM_GPU, see the `PyTorch FBGEMM GitHub `_ and the `PyTorch FBGEMM documentation `_. The `Meta blog post about FBGEMM `_ provides additional background about the library. Installing FBGEMM_GPU ---------------------- Installing FBGEMM_GPU consists of the following steps: * Set up an isolated Miniconda environment * Install ROCm using Docker or the :doc:`package manager ` * Install the nightly `PyTorch `_ build * Complete the pre-build and build tasks .. note:: FBGEMM_GPU doesn't require the installation of FBGEMM. To optionally install FBGEMM, see the `FBGEMM install instructions `_. Set up the Miniconda environment ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ To install Miniconda, use the following commands. #. Install a `Miniconda environment `_ for reproducible builds. All subsequent commands run inside this environment. .. code-block:: shell export PLATFORM_NAME="$(uname -s)-$(uname -m)" # Set the Miniconda prefix directory miniconda_prefix=$HOME/miniconda # Download the Miniconda installer wget -q "https://repo.anaconda.com/miniconda/Miniconda3-latest-${PLATFORM_NAME}.sh" -O miniconda.sh # Run the installer bash miniconda.sh -b -p "$miniconda_prefix" -u # Load the shortcuts . ~/.bashrc # Run updates conda update -n base -c defaults -y conda #. Create a Miniconda environment with Python 3.12: .. code-block:: shell env_name= python_version=3.12 # Create the environment conda create -y --name ${env_name} python="${python_version}" # Upgrade PIP and pyOpenSSL package conda run -n ${env_name} pip install --upgrade pip conda run -n ${env_name} python -m pip install pyOpenSSL>22.1.0 #. Install additional build tools: .. code-block:: shell conda install -n ${env_name} -y \ click \ cmake \ hypothesis \ jinja2 \ make \ ncurses \ ninja \ numpy \ scikit-build \ wheel Install the ROCm components ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBGEMM_GPU can run in a ROCm Docker container or in conjunction with the full ROCm installation. The Docker method is recommended because it requires fewer steps and provides a stable environment. To run FBGEMM_GPU in the Docker container, pull the `Minimal Docker image for ROCm `_. This image includes all preinstalled ROCm packages required to integrate FBGEMM. To pull and run the ROCm Docker image, use this command: .. code-block:: shell # Run for ROCm 6.2.0 docker run -it --network=host --shm-size 16G --device=/dev/kfd --device=/dev/dri --group-add video \ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ipc=host rocm/rocm-terminal:6.2 /bin/bash .. 
note:: The `Full Docker image for ROCm `_, which includes all ROCm packages, can also be used. However, it results in a very large container, so the minimal Docker image is recommended. You can also install ROCm using the package manager. FBGEMM_GPU requires the installation of the full ROCm package. For more information, see :doc:`the ROCm installation guide `. The ROCm package also requires the :doc:`MIOpen ` component as a dependency. To install MIOpen, use the ``apt install`` command. .. code-block:: shell apt install hipify-clang miopen-hip miopen-hip-dev Install PyTorch ^^^^^^^^^^^^^^^^^^^^^^^ Install `PyTorch `_ using ``pip`` for the most reliable and consistent results. #. Install the nightly PyTorch build using ``pip``. .. code-block:: shell # Install the latest nightly, ROCm variant conda run -n ${env_name} pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/rocm6.2/ #. Ensure PyTorch loads correctly. Verify the version and variant of the installation using an ``import`` test. .. code-block:: shell # Ensure that the package loads properly conda run -n ${env_name} python -c "import torch.distributed" # Verify the version and variant of the installation conda run -n ${env_name} python -c "import torch; print(torch.__version__)" Perform the prebuild and build ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #. Clone the FBGEMM repository and the relevant submodules. Use ``pip`` to install the components in ``requirements.txt``. Run the following commands inside the Miniconda environment. .. code-block:: shell # Select a version tag FBGEMM_VERSION=v0.8.0 # Clone the repo along with its submodules git clone https://github.com/pytorch/FBGEMM.git --branch=${FBGEMM_VERSION} --recursive fbgemm_${FBGEMM_VERSION} # Install additional required packages for building and testing cd fbgemm_${FBGEMM_VERSION}/fbgemm_gpu pip install -r requirements.txt #. Clear the build cache to remove stale build information. .. code-block:: shell # !! Run in fbgemm_gpu/ directory inside the Conda environment !! python setup.py clean #. Set the wheel build variables, including the package name, Python version tag, and Python platform name. .. code-block:: shell # Set the package name depending on the build variant export package_name=fbgemm_gpu_rocm # Set the Python version tag. It should follow the convention `py`, # for example, Python 3.12 --> py312 export python_tag=py312 # Determine the processor architecture export ARCH=$(uname -m) # Set the Python platform name for the Linux case export python_plat_name="manylinux2014_${ARCH}" #. Build FBGEMM_GPU for the ROCm platform. Set ``ROCM_PATH`` to the path to your ROCm installation. Run these commands from the ``fbgemm_gpu/`` directory inside the Miniconda environment. .. code-block:: shell # !! Run in the fbgemm_gpu/ directory inside the Conda environment !! export ROCM_PATH=
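# ROCM_PATH is typically /opt/rocm for a standard package-manager install; adjust it to match your ROCm installation.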
# Build for the target architecture of the ROCm device installed on the machine (for example, 'gfx942;gfx90a') # See :doc:`The Linux system requirements <../../reference/system-requirements>` for a list of supported GPUs. export PYTORCH_ROCM_ARCH=$(${ROCM_PATH}/bin/rocminfo | grep -o -m 1 'gfx.*') # Build the wheel artifact only python setup.py bdist_wheel \ --package_variant=rocm \ --python-tag="${python_tag}" \ --plat-name="${python_plat_name}" \ -DHIP_ROOT_DIR="${ROCM_PATH}" \ -DCMAKE_C_FLAGS="-DTORCH_USE_HIP_DSA" \ -DCMAKE_CXX_FLAGS="-DTORCH_USE_HIP_DSA" # Build and install the library into the Conda environment python setup.py install \ --package_variant=rocm \ -DHIP_ROOT_DIR="${ROCM_PATH}" \ -DCMAKE_C_FLAGS="-DTORCH_USE_HIP_DSA" \ -DCMAKE_CXX_FLAGS="-DTORCH_USE_HIP_DSA" Post-build validation ---------------------- After building FBGEMM_GPU, run some verification checks to ensure the build is correct. Continue to run all commands inside the ``fbgemm_gpu/`` directory inside the Miniconda environment. #. The build process generates many build artifacts and C++ templates, so it is important to confirm no undefined symbols remain. .. code-block:: shell # !! Run in fbgemm_gpu/ directory inside the Conda environment !! # Locate the built .SO file fbgemm_gpu_lib_path=$(find . -name fbgemm_gpu_py.so) # Check that the undefined symbols don't include fbgemm_gpu-defined functions nm -gDCu "${fbgemm_gpu_lib_path}" | sort #. Verify the referenced version number of ``GLIBCXX`` and the presence of certain function symbols: .. code-block:: shell # !! Run in fbgemm_gpu/ directory inside the Conda environment !! # Locate the built .SO file fbgemm_gpu_lib_path=$(find . -name fbgemm_gpu_py.so) # Note the versions of GLIBCXX referenced by the .SO # The libstdc++.so.6 available on the install target must support these versions objdump -TC "${fbgemm_gpu_lib_path}" | grep GLIBCXX | sed 's/.*GLIBCXX_\([.0-9]*\).*/GLIBCXX_\1/g' | sort -Vu | cat # Test for the existence of a given function symbol in the .SO nm -gDC "${fbgemm_gpu_lib_path}" | grep " fbgemm_gpu::merge_pooled_embeddings(" nm -gDC "${fbgemm_gpu_lib_path}" | grep " fbgemm_gpu::jagged_2d_to_dense(" Testing FBGEMM ---------------------- FBGEMM includes tests and benchmarks to validate performance. To run these tests, you must use ROCm 5.7 or a more recent version on the host and container. To run FBGEMM tests, follow these instructions: .. code-block:: shell # !! Run inside the Conda environment !! # From the /fbgemm_gpu/ directory cd test export FBGEMM_TEST_WITH_ROCM=1 # Enable for debugging failed kernel executions export HIP_LAUNCH_BLOCKING=1 # Run the test python -m pytest -v -rsx -s -W ignore::pytest.PytestCollectionWarning split_table_batched_embeddings_test.py To run the FBGEMM_GPU ``uvm`` test, use these commands. These tests only support the AMD MI210 and more recent GPUs. .. code-block:: shell # Run this inside the Conda environment from the /fbgemm_gpu/ directory export HSA_XNACK=1 cd test python -m pytest -v -rsx -s -W ignore::pytest.PytestCollectionWarning ./uvm/uvm_test.py --- .. meta:: :description: How to use model quantization techniques to speed up inference. :keywords: ROCm, LLM, fine-tuning, usage, tutorial, quantization, Quark, GPTQ, transformers, bitsandbytes ***************************** Model quantization techniques ***************************** Quantization reduces the model size compared to its native full-precision version, making it easier to fit large models onto GPUs with limited memory usage. 
This section explains how to perform LLM quantization using AMD Quark, GPTQ and bitsandbytes on AMD Instinct hardware. .. _quantize-llms-quark: AMD Quark ========= `AMD Quark `_ offers the leading efficient and scalable quantization solution tailored to AMD Instinct GPUs. It supports ``FP8`` and ``INT8`` quantization for activations, weights, and KV cache, including ``FP8`` attention. For very large models, it employs a two-level ``INT4-FP8`` scheme—storing weights in ``INT4`` while computing with ``FP8``—for nearly 4× compression without sacrificing accuracy. Quark scales efficiently across multiple GPUs, efficiently handling ultra-large models like Llama-3.1-405B. Quantized ``FP8`` models like Llama, Mixtral, and Grok-1 are available under the `AMD organization on Hugging Face `_, and can be deployed directly via `vLLM `_. Installing Quark ------------------- The latest release of Quark can be installed with pip .. code-block:: shell pip install amd-quark For detailed installation instructions, refer to the `Quark documentation `_. Using Quark for quantization ----------------------------- #. First, load the pre-trained model and its corresponding tokenizer using the Hugging Face ``transformers`` library. .. code-block:: python from transformers import AutoTokenizer, AutoModelForCausalLM MODEL_ID = "meta-llama/Llama-2-70b-chat-hf" MAX_SEQ_LEN = 512 model = AutoModelForCausalLM.from_pretrained( MODEL_ID, device_map="auto", torch_dtype="auto", ) model.eval() tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, model_max_length=MAX_SEQ_LEN) tokenizer.pad_token = tokenizer.eos_token #. Prepare the calibration DataLoader (static quantization requires calibration data). .. code-block:: python from datasets import load_dataset from torch.utils.data import DataLoader BATCH_SIZE = 1 NUM_CALIBRATION_DATA = 512 dataset = load_dataset("mit-han-lab/pile-val-backup", split="validation") text_data = dataset["text"][:NUM_CALIBRATION_DATA] tokenized_outputs = tokenizer( text_data, return_tensors="pt", padding=True, truncation=True, max_length=MAX_SEQ_LEN ) calib_dataloader = DataLoader( tokenized_outputs['input_ids'], batch_size=BATCH_SIZE, drop_last=True ) #. Define the quantization configuration. See the comments in the following code snippet for descriptions of each configuration option. .. code-block:: python from quark.torch.quantization import (Config, QuantizationConfig, FP8E4M3PerTensorSpec) # Define fp8/per-tensor/static spec. FP8_PER_TENSOR_SPEC = FP8E4M3PerTensorSpec(observer_method="min_max", is_dynamic=False).to_quantization_spec() # Define global quantization config, input tensors and weight apply FP8_PER_TENSOR_SPEC. global_quant_config = QuantizationConfig(input_tensors=FP8_PER_TENSOR_SPEC, weight=FP8_PER_TENSOR_SPEC) # Define quantization config for kv-cache layers, output tensors apply FP8_PER_TENSOR_SPEC. KV_CACHE_SPEC = FP8_PER_TENSOR_SPEC kv_cache_layer_names_for_llama = ["*k_proj", "*v_proj"] kv_cache_quant_config = {name : QuantizationConfig(input_tensors=global_quant_config.input_tensors, weight=global_quant_config.weight, output_tensors=KV_CACHE_SPEC) for name in kv_cache_layer_names_for_llama} layer_quant_config = kv_cache_quant_config.copy() EXCLUDE_LAYERS = ["lm_head"] quant_config = Config( global_quant_config=global_quant_config, layer_quant_config=layer_quant_config, kv_cache_quant_config=kv_cache_quant_config, exclude=EXCLUDE_LAYERS) #. Quantize the model and export .. 
code-block:: python import torch from quark.torch import ModelQuantizer, ModelExporter from quark.torch.export import ExporterConfig, JsonExporterConfig # Apply quantization. quantizer = ModelQuantizer(quant_config) quant_model = quantizer.quantize_model(model, calib_dataloader) # Freeze quantized model to export. freezed_model = quantizer.freeze(model) # Define export config. LLAMA_KV_CACHE_GROUP = ["*k_proj", "*v_proj"] export_config = ExporterConfig(json_export_config=JsonExporterConfig()) export_config.json_export_config.kv_cache_group = LLAMA_KV_CACHE_GROUP EXPORT_DIR = MODEL_ID.split("/")[1] + "-w-fp8-a-fp8-kvcache-fp8-pertensor" exporter = ModelExporter(config=export_config, export_dir=EXPORT_DIR) with torch.no_grad(): exporter.export_safetensors_model(freezed_model, quant_config=quant_config, tokenizer=tokenizer) Evaluating the quantized model with vLLM ---------------------------------------- The exported Quark-quantized model can be loaded directly by vLLM for inference. You need to specify the model path and inform vLLM about the quantization method (``quantization='quark'``) and the KV cache data type (``kv_cache_dtype='fp8'``). Use the ``LLM`` interface to load the model: .. code-block:: python from vllm import LLM, SamplingParams # Sample prompts. prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] # Create a sampling params object. sampling_params = SamplingParams(temperature=0.8, top_p=0.95) # Create an LLM. llm = LLM(model="Llama-2-70b-chat-hf-w-fp8-a-fp8-kvcache-fp8-pertensor", kv_cache_dtype='fp8', quantization='quark') # Generate texts from the prompts. The output is a list of RequestOutput objects # that contain the prompt, generated text, and other information. outputs = llm.generate(prompts, sampling_params) # Print the outputs. print("\nGenerated Outputs:\n" + "-" * 60) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}") print(f"Output: {generated_text!r}") print("-" * 60) You can also evaluate the quantized model's accuracy on standard benchmarks using the `lm-evaluation-harness `_. Pass the necessary vLLM arguments to ``lm_eval`` via ``--model_args``. .. code-block:: shell lm_eval --model vllm \ --model_args pretrained=Llama-2-70b-chat-hf-w-fp8-a-fp8-kvcache-fp8-pertensor,kv_cache_dtype='fp8',quantization='quark' \ --tasks gsm8k This provides a standardized way to measure the performance impact of quantization. .. _fine-tune-llms-gptq: GPTQ ==== GPTQ is a post-training quantization technique where each row of the weight matrix is quantized independently to find a version of the weights that minimizes error. These weights are quantized to ``int4`` but are restored to ``fp16`` on the fly during inference, reducing memory usage by roughly a factor of four. Inference is also expected to be faster because GPTQ models use a lower bit width, so less data has to be moved. Before setting up the GPTQ configuration in Transformers, ensure the `AutoGPTQ `_ library is installed. Installing AutoGPTQ ------------------- The AutoGPTQ library implements the GPTQ algorithm. #. Use the following command to install the latest stable release of AutoGPTQ from pip. .. code-block:: shell # This will install a pre-built wheel for a specific ROCm version.
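# The rocm573 index hosts wheels built against ROCm 5.7.3; choose the index that matches your installed ROCm version.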
pip install auto-gptq --no-build-isolation --extra-index-url https://huggingface.github.io/autogptq-index/whl/rocm573/ Or, install AutoGPTQ from source for the appropriate ROCm version (for example, ROCm 6.1). .. code-block:: shell # Clone the source code. git clone https://github.com/AutoGPTQ/AutoGPTQ.git cd AutoGPTQ # Speed up the compilation by specifying PYTORCH_ROCM_ARCH for the target device. PYTORCH_ROCM_ARCH=gfx942 ROCM_VERSION=6.1 pip install . # Show the package after the installation #. Run ``pip show auto-gptq`` to print information for the installed ``auto-gptq`` package. Its output should look like this: .. code-block:: shell Name: auto-gptq Version: 0.8.0.dev0+rocm6.1 ... Using GPTQ with AutoGPTQ ------------------------ #. Run the following code snippet. .. code-block:: python from transformers import AutoTokenizer, TextGenerationPipeline from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig base_model_name = "NousResearch/Llama-2-7b-hf" quantized_model_name = "llama-2-7b-hf-gptq" tokenizer = AutoTokenizer.from_pretrained(base_model_name, use_fast=True) examples = [ tokenizer( "auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm." ) ] print(examples) The resulting examples should be a list of dictionaries whose keys are ``input_ids`` and ``attention_mask``. #. Set up the quantization configuration using the following snippet. .. code-block:: python quantize_config = BaseQuantizeConfig( bits=4, # quantize model to 4-bit group_size=128, # it is recommended to set the value to 128 desc_act=False, ) #. Load the non-quantized model using the AutoGPTQ class and run the quantization. .. code-block:: python # Import auto_gptq class. from auto_gptq import AutoGPTQForCausalLM # Load non-quantized model. base_model = AutoGPTQForCausalLM.from_pretrained(base_model_name, quantize_config, device_map = "auto") base_model.quantize(examples) # Save quantized model. base_model.save_quantized(quantized_model_name) Using GPTQ with Hugging Face Transformers ------------------------------------------ #. To perform GPTQ quantization using Hugging Face Transformers, create a ``GPTQConfig`` instance and set the number of bits to quantize to and a dataset to calibrate the weights. .. code-block:: python from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig base_model_name = "NousResearch/Llama-2-7b-hf" tokenizer = AutoTokenizer.from_pretrained(base_model_name) gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer) #. Load a model to quantize using ``AutoModelForCausalLM`` and pass the ``gptq_config`` to its ``from_pretrained`` method. Set ``device_map="auto"`` to automatically offload the model to available GPU resources. .. code-block:: python quantized_model = AutoModelForCausalLM.from_pretrained( base_model_name, device_map="auto", quantization_config=gptq_config) #. Once the model is quantized, you can push the model and tokenizer to the Hugging Face Hub for easy sharing and access. .. code-block:: python quantized_model.push_to_hub("llama-2-7b-hf-gptq") tokenizer.push_to_hub("llama-2-7b-hf-gptq") Or, you can save the model locally using the following snippet. .. code-block:: python quantized_model.save_pretrained("llama-2-7b-gptq") tokenizer.save_pretrained("llama-2-7b-gptq") ExLlama-v2 support ------------------ ExLlama is a Python/C++/CUDA implementation of the Llama model that is designed for faster inference with 4-bit GPTQ weights.
The ExLlama kernel is activated by default when users create a ``GPTQConfig`` object. To boost inference speed even further on Instinct GPUs, use the ExLlama-v2 kernels by configuring the ``exllama_config`` parameter as the following. .. code-block:: python from transformers import AutoModelForCausalLM, GPTQConfig #pretrained_model_dir = "meta-llama/Llama-2-7b" base_model_name = "NousResearch/Llama-2-7b-hf" gptq_config = GPTQConfig(bits=4, dataset="c4", exllama_config={"version":2}) quantized_model = AutoModelForCausalLM.from_pretrained( base_model_name, device_map="auto", quantization_config=gptq_config) bitsandbytes ============ The `ROCm-aware bitsandbytes `_ library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizer, matrix multiplication, and 8-bit and 4-bit quantization functions. The library includes quantization primitives for 8-bit and 4-bit operations through ``bitsandbytes.nn.Linear8bitLt`` and ``bitsandbytes.nn.Linear4bit`` and 8-bit optimizers through the ``bitsandbytes.optim`` module. These modules are supported on AMD Instinct GPUs. Installing bitsandbytes ----------------------- #. To install bitsandbytes for ROCm 6.0 (and later), use the following commands. .. code-block:: shell # Clone the github repo git clone --recurse https://github.com/ROCm/bitsandbytes.git cd bitsandbytes git checkout rocm_enabled_multi_backend # Install dependencies pip install -r requirements-dev.txt # Use -DBNB_ROCM_ARCH to specify target GPU arch cmake -DBNB_ROCM_ARCH="gfx942" -DCOMPUTE_BACKEND=hip -S . # Compile the project make # Install python setup.py install #. Run ``pip show bitsandbytes`` to show the information about the installed bitsandbytes package. Its output should look like the following. .. code-block:: shell Name: bitsandbytes Version: 0.44.0.dev0 ... Using bitsandbytes primitives ----------------------------- To get started with bitsandbytes primitives, use the following code as reference. .. code-block:: python import bitsandbytes as bnb # Use Int8 Matrix Multiplication bnb.matmul(..., threshold=6.0) # Use bitsandbytes 8-bit Optimizers adam = bnb.optim.Adam8bit(model.parameters(), lr=0.001, betas=(0.9, 0.995)) Using bitsandbytes with Hugging Face Transformers ------------------------------------------------- To load a Transformers model in 4-bit, set ``load_in_4bit=true`` in ``BitsAndBytesConfig``. .. code-block:: python from transformers import AutoModelForCausalLM, BitsAndBytesConfig base_model_name = "NousResearch/Llama-2-7b-hf" quantization_config = BitsAndBytesConfig(load_in_4bit=True) bnb_model_4bit = AutoModelForCausalLM.from_pretrained( base_model_name, device_map="auto", quantization_config=quantization_config) # Check the memory footprint with get_memory_footprint method print(bnb_model_4bit.get_memory_footprint()) To load a model in 8-bit for inference, use the ``load_in_8bit`` option. .. code-block:: python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig base_model_name = "NousResearch/Llama-2-7b-hf" tokenizer = AutoTokenizer.from_pretrained(base_model_name) quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained(base_model_name) bnb_model_8bit = AutoModelForCausalLM.from_pretrained( base_model_name, device_map="auto", quantization_config=quantization_config) prompt = "What is a large language model?" 
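# Tokenize the prompt, generate with the 8-bit quantized model, and decode the output.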
inputs = tokenizer(prompt, return_tensors="pt").to("cuda") generated_ids = bnb_model_8bit.generate(**inputs) outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) --- .. meta:: :description: How to optimize Triton kernels for ROCm. :keywords: ROCm, LLM, fine-tuning, usage, MI300X, tutorial, Triton, kernel, performance, optimization ************************* Optimizing Triton kernels ************************* This section introduces the general steps for `Triton `_ kernel optimization. Broadly, Triton kernel optimization is similar to :doc:`HIP ` and CUDA kernel optimization. Refer to the :ref:`Triton kernel performance optimization ` section of the :doc:`workload` guide for detailed information. Triton kernel performance optimization includes the following topics. * :ref:`mi300x-autotunable-kernel-config` * :ref:`mi300x-mlir-analysis` * :ref:`mi300x-assembly-analysis` * :ref:`mi300x-torchinductor-tuning` * :ref:`mi300x-compute-kernel-occ` --- .. meta:: :description: How to use ROCm profiling and debugging tools. :keywords: ROCm, LLM, fine-tuning, usage, MI300X, tutorial, profiling, debugging, performance, Triton *********************** Profiling and debugging *********************** This section provides an index for further documentation on profiling and debugging tools and their common usage patterns. See :ref:`AMD Instinct MI300X™ workload optimization ` for a conceptual summary of the workload profiling workflow for ROCm applications on AMD hardware -- including fine-tuning LLMs. There, you'll find information on higher-level and kernel-level profiling tools as well as other profiling and debugging suggestions. * :ref:`PyTorch Profiler ` * :ref:`ROCm profiling tools ` * :ref:`ROCProfiler ` * :ref:`ROCm Compute Profiler ` * :ref:`ROCm Systems Profiler ` * :ref:`ROCr Debug Agent ` --- .. meta:: :description: Learn about vLLM V1 inference tuning on AMD Instinct GPUs for optimal performance. :keywords: AMD, Instinct, MI300X, HPC, tuning, BIOS settings, NBIO, ROCm, environment variable, performance, HIP, Triton, PyTorch TunableOp, vLLM, RCCL, MIOpen, GPU, resource utilization .. _mi300x-vllm-optimization: .. _vllm-optimization: ******************************** vLLM V1 performance optimization ******************************** This guide helps you maximize vLLM throughput and minimize latency on AMD Instinct MI300X, MI325X, MI350X, and MI355X GPUs. Learn how to: * Enable AITER (AI Tensor Engine for ROCm) for speedups on LLMs. * Configure environment variables for optimal HIP, RCCL, and Quick Reduce performance. * Select the right attention backend for your workload (AITER MHA/MLA vs. Triton). * Choose parallelism strategies (tensor, pipeline, data, expert) for multi-GPU deployments. * Apply quantization (``FP8``/``FP4``) to reduce memory usage by 2-4× with minimal accuracy loss. * Tune engine arguments (batch size, memory utilization, graph modes) for your use case. * Benchmark and scale across single-node and multi-node configurations. Performance environment variables ================================= The following variables are generally useful for Instinct MI300X/MI355X GPUs and vLLM: * **HIP and math libraries** * ``export HIP_FORCE_DEV_KERNARG=1`` — improves kernel launch performance by forcing device kernel arguments. This is already set by default in :doc:`vLLM ROCm Docker images `. Bare-metal users should set this manually. * ``export TORCH_BLAS_PREFER_HIPBLASLT=1`` — explicitly prefers hipBLASLt over hipBLAS for GEMM operations.
By default, PyTorch uses heuristics to choose the best BLAS library. Setting this can improve linear layer performance in some workloads. * **RCCL (collectives for multi-GPU)** * ``export NCCL_MIN_NCHANNELS=112`` — increases RCCL channels from default (typically 32-64) to 112 on the Instinct MI300X. **Only beneficial for multi-GPU distributed workloads** (tensor parallelism, pipeline parallelism). Single-GPU inference does not need this. .. _vllm-optimization-aiter-switches: AITER (AI Tensor Engine for ROCm) switches ========================================== AITER (AI Tensor Engine for ROCm) provides ROCm-specific fused kernels optimized for Instinct MI350 Series and MI300X GPUs in vLLM V1. How AITER flags work: * ``VLLM_ROCM_USE_AITER`` is the master switch (defaults to ``False``/``0``). * Individual feature flags (``VLLM_ROCM_USE_AITER_LINEAR``, ``VLLM_ROCM_USE_AITER_MOE``, and so on) default to ``True`` but only activate when the master switch is enabled. * To enable a specific AITER feature, you must set both ``VLLM_ROCM_USE_AITER=1`` and the specific feature flag to ``1``. Quick start examples: .. code-block:: bash # Enable all AITER optimizations (recommended for most workloads) export VLLM_ROCM_USE_AITER=1 vllm serve MODEL_NAME # Enable AITER Fused MoE and enable Triton Prefill-Decode (split) attention export VLLM_ROCM_USE_AITER=1 export VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1 export VLLM_ROCM_USE_AITER_MHA=0 vllm serve MODEL_NAME # Disable AITER entirely (i.e, use vLLM Triton Unified Attention Kernel) export VLLM_ROCM_USE_AITER=0 vllm serve MODEL_NAME .. list-table:: :header-rows: 1 :widths: 30 70 * - Environment variable - Description (default behavior) * - ``VLLM_ROCM_USE_AITER`` - Master switch to enable AITER kernels (``0``/``False`` by default). All other ``VLLM_ROCM_USE_AITER_*`` flags require this to be set to ``1``. * - ``VLLM_ROCM_USE_AITER_LINEAR`` - Use AITER quantization operators + GEMM for linear layers (defaults to ``True`` when AITER is on). Accelerates matrix multiplications in all transformer layers. **Recommended to keep enabled**. * - ``VLLM_ROCM_USE_AITER_MOE`` - Use AITER fused-MoE kernels (defaults to ``True`` when AITER is on). Accelerates Mixture-of-Experts routing and computation. See the note on :ref:`AITER MoE requirements `. * - ``VLLM_ROCM_USE_AITER_RMSNORM`` - Use AITER RMSNorm kernels (defaults to ``True`` when AITER is on). Accelerates normalization layers. **Recommended: keep enabled.** * - ``VLLM_ROCM_USE_AITER_MLA`` - Use AITER Multi-head Latent Attention for supported models, for example, DeepSeek-V3/R1 (defaults to ``True`` when AITER is on). See the section on :ref:`AITER MLA requirements `. * - ``VLLM_ROCM_USE_AITER_MHA`` - Use AITER Multi-Head Attention kernels (defaults to ``True`` when AITER is on; set to ``0`` to use Triton attention backends and Prefill-Decode attention backend instead). See :ref:`attention backend selection `. * - ``VLLM_ROCM_USE_AITER_UNIFIED_ATTENTION`` - Enable AITER's optimized unified attention kernel (defaults to ``False``). Only takes effect when: AITER is enabled; unified attention mode is active (``VLLM_V1_USE_PREFILL_DECODE_ATTENTION=0``); and AITER MHA is disabled (``VLLM_ROCM_USE_AITER_MHA=0``). When disabled, falls back to vLLM's Triton unified attention. * - ``VLLM_ROCM_USE_AITER_FP8BMM`` - Use AITER ``FP8`` batched matmul (defaults to ``True`` when AITER is on). Fuses ``FP8`` per-token quantization with batched GEMM (used in MLA models like DeepSeek-V3). Requires an Instinct MI300X/MI355X GPU. 
* - ``VLLM_ROCM_USE_SKINNY_GEMM`` - Prefer skinny-GEMM kernel variants for small batch sizes (defaults to ``True``). Improves performance when ``M`` dimension is small. **Recommended to keep enabled**. * - ``VLLM_ROCM_FP8_PADDING`` - Pad ``FP8`` linear weight tensors to improve memory locality (defaults to ``True``). Minor memory overhead for better performance. * - ``VLLM_ROCM_MOE_PADDING`` - Pad MoE weight tensors for better memory access patterns (defaults to ``True``). Same memory/performance tradeoff as ``FP8`` padding. * - ``VLLM_ROCM_CUSTOM_PAGED_ATTN`` - Use custom paged-attention decode kernel when Prefill-Decode attention backend is selected (defaults to ``True``). See :ref:`Attention backend selection with AITER `. .. note:: When ``VLLM_ROCM_USE_AITER=1``, most AITER component flags (``LINEAR``, ``MOE``, ``RMSNORM``, ``MLA``, ``MHA``, ``FP8BMM``) automatically default to ``True``. You typically only need to set the master switch ``VLLM_ROCM_USE_AITER=1`` to enable all optimizations. ROCm provides a prebuilt optimized Docker image for validating the performance of LLM inference with vLLM on MI300X Series GPUs. The Docker image includes ROCm, vLLM, and PyTorch. For more information, see :doc:`/how-to/rocm-for-ai/inference/benchmark-docker/vllm`. .. _vllm-optimization-aiter-moe-requirements: AITER MoE requirements (Mixtral, DeepSeek-V2/V3, Qwen-MoE models) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``VLLM_ROCM_USE_AITER_MOE`` enables AITER's optimized Mixture-of-Experts kernels, such as expert routing (topk selection) and expert computation for better performance. Applicable models: * Mixtral series: for example, Mixtral-8x7B / Mixtral-8x22B * Llama-4 family: for example, Llama-4-Scout-17B-16E / Llama-4-Maverick-17B-128E * DeepSeek family: DeepSeek-V2 / DeepSeek-V3 / DeepSeek-R1 * Qwen family: Qwen1.5-MoE / Qwen2-MoE / Qwen2.5-MoE series * Other MoE architectures When to enable: * **Enable (default):** For all MoE models on the Instinct MI300X/MI355X for best throughput * **Disable:** Only for debugging or if you encounter numerical issues Example usage: .. code-block:: bash # Standard MoE model (Mixtral) VLLM_ROCM_USE_AITER=1 vllm serve mistralai/Mixtral-8x7B-Instruct-v0.1 # Hybrid MoE+MLA model (DeepSeek-V3) - requires both MOE and MLA flags VLLM_ROCM_USE_AITER=1 vllm serve deepseek-ai/DeepSeek-V3 \ --block-size 1 \ --tensor-parallel-size 8 .. _vllm-optimization-aiter-mla-requirements: AITER MLA requirements (DeepSeek-V3/R1 models) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``VLLM_ROCM_USE_AITER_MLA`` enables AITER MLA (Multi-head Latent Attention) optimization for supported models. Defaults to **True** when AITER is on. Critical requirement: * **Must** explicitly set ``--block-size 1`` .. important:: If you omit ``--block-size 1``, vLLM will raise an error rather than defaulting to 1. Applicable models: * DeepSeek-V3 / DeepSeek-R1 * DeepSeek-V2 * Other models using multi-head latent attention (MLA) architecture Example usage: .. code-block:: bash # DeepSeek-R1 with AITER MLA (requires 8 GPUs) VLLM_ROCM_USE_AITER=1 vllm serve deepseek-ai/DeepSeek-R1 \ --block-size 1 \ --tensor-parallel-size 8 .. _vllm-optimization-aiter-backend-selection: Attention backend selection with AITER ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Understanding which attention backend to use helps optimize your deployment. Quick reference: Which attention backend will I get? 
Default behavior (no configuration) Without setting any environment variables, vLLM uses: * **vLLM Triton Unified Attention** — A single Triton kernel handling both prefill and decode phases * Works on all ROCm platforms * Good baseline performance **Recommended**: Enable AITER (set ``VLLM_ROCM_USE_AITER=1``) When you enable AITER, the backend is automatically selected based on your model: .. code-block:: text Is your model using MLA architecture? (DeepSeek-V3/R1/V2) ├─ YES → AITER MLA Backend │ • Requires --block-size 1 │ • Best performance for MLA models │ • Automatically selected │ └─ NO → AITER MHA Backend • For standard transformer models (Llama, Mistral, etc.) • Optimized for Instinct MI300X/MI355X • Automatically selected **Advanced**: Manual backend selection Most users won't need this, but you can override the defaults: .. list-table:: :widths: 40 60 :header-rows: 1 * - To use this backend - Set these flags * - AITER MLA (MLA models only) - ``VLLM_ROCM_USE_AITER=1`` (auto-selects for DeepSeek-V3/R1) * - AITER MHA (standard models) - ``VLLM_ROCM_USE_AITER=1`` (auto-selects for non-MLA models) * - vLLM Triton Unified (default) - ``VLLM_ROCM_USE_AITER=0`` (or unset) * - Triton Prefill-Decode (split) without AITER - | ``VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1`` * - Triton Prefill-Decode (split) along with AITER Fused-MoE - | ``VLLM_ROCM_USE_AITER=1`` | ``VLLM_ROCM_USE_AITER_MHA=0`` | ``VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1`` * - AITER Unified Attention - | ``VLLM_ROCM_USE_AITER=1`` | ``VLLM_ROCM_USE_AITER_MHA=0`` | ``VLLM_ROCM_USE_AITER_UNIFIED_ATTENTION=1`` **Quick start examples**: .. code-block:: bash # Recommended: Standard model with AITER (Llama, Mistral, Qwen, etc.) VLLM_ROCM_USE_AITER=1 vllm serve meta-llama/Llama-3.3-70B-Instruct # MLA model with AITER (DeepSeek-V3/R1) VLLM_ROCM_USE_AITER=1 vllm serve deepseek-ai/DeepSeek-R1 \ --block-size 1 \ --tensor-parallel-size 8 # Advanced: Use Prefill-Decode split (for short input cases) with AITER Fused-MoE VLLM_ROCM_USE_AITER=1 \ VLLM_ROCM_USE_AITER_MHA=0 \ VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1 \ vllm serve meta-llama/Llama-4-Scout-17B-16E **Which backend should I choose?** .. list-table:: :widths: 30 70 :header-rows: 1 * - Your use case - Recommended backend * - **Standard transformer models** (Llama, Mistral, Qwen, Mixtral) - **AITER MHA** (``VLLM_ROCM_USE_AITER=1``) — **Recommended for most workloads** on Instinct MI300X/MI355X. Provides optimized attention kernels for both prefill and decode phases. * - **MLA models** (DeepSeek-V3/R1/V2) - **AITER MLA** (auto-selected with ``VLLM_ROCM_USE_AITER=1``) — Required for optimal performance, must use ``--block-size 1`` * - **gpt-oss models** (gpt-oss-120b/20b) - **AITER Unified Attention** (``VLLM_ROCM_USE_AITER=1``, ``VLLM_ROCM_USE_AITER_MHA=0``, ``VLLM_ROCM_USE_AITER_UNIFIED_ATTENTION=1``) — Required for optimal performance * - **Debugging or compatibility** - **vLLM Triton Unified** (default with ``VLLM_ROCM_USE_AITER=0``) — Generic fallback, works everywhere **Important notes:** * **AITER MHA and AITER MLA are mutually exclusive** — vLLM automatically detects MLA models and selects the appropriate backend * **For 95% of users:** Simply set ``VLLM_ROCM_USE_AITER=1`` and let vLLM choose the right backend * When in doubt, start with AITER enabled (the recommended configuration) and profile your specific workload Backend choice quick recipes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ * **Standard transformers (any prompt length):** Start with ``VLLM_ROCM_USE_AITER=1`` → AITER MHA. 
For CUDA graph modes, see architecture-specific guidance below (dense and MoE models have different optimal modes). * **Latency-sensitive chat (low TTFT):** keep ``--max-num-batched-tokens`` ≤ **8k–16k** with AITER. * **Streaming decode (low ITL):** raise ``--max-num-batched-tokens`` to **32k–64k**. * **Offline max throughput:** ``--max-num-batched-tokens`` ≥ **32k** with ``cudagraph_mode=FULL``. **How to verify which backend is active** Check vLLM's startup logs to confirm which attention backend is being used: .. code-block:: bash # Start vLLM and check logs VLLM_ROCM_USE_AITER=1 vllm serve meta-llama/Llama-3.3-70B-Instruct 2>&1 | grep -i attention **Expected log messages:** * AITER MHA: ``Using Aiter Flash Attention backend on V1 engine.`` * AITER MLA: ``Using AITER MLA backend on V1 engine.`` * vLLM Triton MLA: ``Using Triton MLA backend on V1 engine.`` * vLLM Triton Unified: ``Using Triton Attention backend on V1 engine.`` * AITER Triton Unified: ``Using Aiter Unified Attention backend on V1 engine.`` * AITER Triton Prefill-Decode: ``Using Rocm Attention backend on V1 engine.`` Attention backend technical details ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This section provides technical details about vLLM's attention backends on ROCm. vLLM V1 on ROCm provides these attention implementations: 1. **vLLM Triton Unified Attention** (default when AITER is **off**) * Single unified Triton kernel handling both chunked prefill and decode phases * Generic implementation that works across all ROCm platforms * Good baseline performance * Automatically selected when ``VLLM_ROCM_USE_AITER=0`` (or unset) * Supports GPT-OSS 2. **AITER Triton Unified Attention** (advanced, requires manual configuration) * The AMD-optimized unified Triton kernel * Enable with ``VLLM_ROCM_USE_AITER=1``, ``VLLM_ROCM_USE_AITER_MHA=0``, and ``VLLM_ROCM_USE_AITER_UNIFIED_ATTENTION=1``. * Only useful for specific workloads. Most users should use AITER MHA instead. * This backend is recommended when running GPT-OSS. 3. **AITER Triton Prefill-Decode Attention** (hybrid, Instinct MI300X-optimized) * Enable with ``VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1`` * Uses separate kernels for prefill and decode phases: * **Prefill**: ``context_attention_fwd`` Triton kernel * **Primary decode**: ``torch.ops._rocm_C.paged_attention`` (custom ROCm kernel optimized for head sizes 64/128, block sizes 16/32, GQA 1–16, context ≤131k; sliding window not supported) * **Fallback decode**: ``kernel_paged_attention_2d`` Triton kernel when shapes don't meet primary decode requirements * Usually performs better than the unified Triton kernels * Performance vs AITER MHA varies: AITER MHA is typically faster overall, but the Prefill-Decode split may win in short input scenarios * The custom paged attention decode kernel is controlled by ``VLLM_ROCM_CUSTOM_PAGED_ATTN`` (default **True**) 4. **AITER Multi-Head Attention (MHA)** (default when AITER is **on**) * Controlled by ``VLLM_ROCM_USE_AITER_MHA`` (**1** = enabled) * Best all-around performance for standard transformer models * Automatically selected when ``VLLM_ROCM_USE_AITER=1`` and model is not MLA 5. **vLLM Triton Multi-head Latent Attention (MLA)** (for DeepSeek-V3/R1/V2) * Automatically selected when ``VLLM_ROCM_USE_AITER=0`` (or unset) 6.
**AITER Multi-head Latent Attention (MLA)** (for DeepSeek-V3/R1/V2) * Controlled by ``VLLM_ROCM_USE_AITER_MLA`` (``1`` = enabled) * Required for optimal performance on MLA architecture models * Automatically selected when ``VLLM_ROCM_USE_AITER=1`` and model uses MLA * Requires ``--block-size 1`` Quick Reduce (large all-reduces on ROCm) ======================================== **Quick Reduce** is an alternative to RCCL/custom all-reduce for **large** inputs (MI300-class GPUs). It supports FP16/BF16 as well as symmetric INT8/INT6/INT4 quantized all-reduce (group size 32). .. warning:: Quantization can affect accuracy. Validate quality before deploying. Control via: * ``VLLM_ROCM_QUICK_REDUCE_QUANTIZATION`` ∈ ``["NONE","FP","INT8","INT6","INT4"]`` (default ``NONE``). * ``VLLM_ROCM_QUICK_REDUCE_CAST_BF16_TO_FP16``: cast BF16 input to FP16 (``1/True`` by default for performance). * ``VLLM_ROCM_QUICK_REDUCE_MAX_SIZE_BYTES_MB``: cap the preset buffer (default ``NONE`` ≈ ``2048`` MB). Quick Reduce tends to help **throughput** at higher TP counts (for example, 4–8) with many concurrent requests. Parallelism strategies (run vLLM on multiple GPUs) ================================================== vLLM supports the following parallelism strategies: 1. Tensor parallelism 2. Pipeline parallelism 3. Data parallelism 4. Expert parallelism For more details, see `Parallelism and scaling `_. **Choosing the right strategy:** * **Tensor Parallelism (TP)**: Use when model doesn't fit on one GPU. Prefer staying within a single XGMI island (≤8 GPUs on the Instinct MI300X). * **Pipeline Parallelism (PP)**: Use for very large models across nodes. Set TP to GPUs per node, scale with PP across nodes. * **Data Parallelism (DP)**: Use when model fits on single GPU or TP group, and you need higher throughput. Combine with TP/PP for large models. * **Expert Parallelism (EP)**: Use for MoE models with ``--enable-expert-parallel``. More efficient than TP for MoE layers. Tensor parallelism ^^^^^^^^^^^^^^^^^^ Tensor parallelism splits each layer of the model weights across multiple GPUs when the model doesn't fit on a single GPU. This is primarily for memory capacity. **Use tensor parallelism when:** * Model does not fit on one GPU (OOM) * Need to enable larger batch sizes by distributing KV cache across GPUs **Examples:** .. code-block:: bash # Tensor parallelism: Split model across 2 GPUs vllm serve /path/to/model --dtype float16 --tensor-parallel-size 2 # Combining TP and two vLLM instance, each split across 2 GPUs (4 GPUs total) CUDA_VISIBLE_DEVICES=0,1 vllm serve /path/to/model --dtype float16 --tensor-parallel-size 2 --port 8000 CUDA_VISIBLE_DEVICES=2,3 vllm serve /path/to/model --dtype float16 --tensor-parallel-size 2 --port 8001 .. note:: **ROCm GPU visibility:** vLLM on ROCm reads ``CUDA_VISIBLE_DEVICES``. Keep ``HIP_VISIBLE_DEVICES`` unset to avoid conflicts. .. tip:: For structured data parallelism deployments with load balancing, see :ref:`data-parallelism-section`. Pipeline parallelism ^^^^^^^^^^^^^^^^^^^^ Pipeline parallelism splits the model's layers across multiple GPUs or nodes, with each GPU processing different layers sequentially. This is primarily used for multi-node deployments where the model is too large for a single node. 
**Use pipeline parallelism when:** * Model is too large for a single node (combine PP with TP) * GPUs on a node lack high-speed interconnect (e.g., no NVLink/XGMI) - PP may perform better than TP * GPU count doesn't evenly divide the model (PP supports uneven splits) **Common pattern for multi-node:** .. code-block:: bash # 2 nodes × 8 GPUs = 16 GPUs total # TP=8 per node, PP=2 across nodes vllm serve meta-llama/Llama-3.1-405B-Instruct \ --tensor-parallel-size 8 \ --pipeline-parallel-size 2 .. note:: **ROCm best practice**: On the Instinct MI300X, prefer staying within a single XGMI island (≤8 GPUs) using TP only. Use PP when scaling beyond eight GPUs or across nodes. .. _data-parallelism-section: Data parallelism ^^^^^^^^^^^^^^^^ Data parallelism replicates model weights across separate instances/GPUs to process independent batches of requests. This approach increases throughput by distributing the workload across multiple replicas. **Use data parallelism when:** * Model fits on one GPU, but you need higher request throughput * Scaling across multiple nodes horizontally * Combining with tensor parallelism (for example, DP=2 + TP=4 = 8 GPUs total) **Quick start - single-node:** .. code-block:: bash # Model fit in 1 GPU. Creates 2 model replicas (requires 2 GPUs) VLLM_ALL2ALL_BACKEND="allgather_reducescatter" vllm serve /path/to/model \ --data-parallel-size 2 \ --disable-nccl-for-dp-synchronization .. tip:: For ROCm, currently use ``VLLM_ALL2ALL_BACKEND="allgather_reducescatter"`` and ``--disable-nccl-for-dp-synchronization`` with data parallelism. Choosing a load balancing strategy """"""""""""""""""""""""""""""""""" vLLM supports two modes for routing requests to DP ranks: .. list-table:: :header-rows: 1 :widths: 30 35 35 * - - **Internal LB** (recommended) - **External LB** * - **HTTP endpoints** - 1 endpoint, vLLM routes internally - N endpoints, you provide external router * - **Single-node config** - ``--data-parallel-size N`` - ``--data-parallel-size N --data-parallel-rank 0..N-1`` + different ports * - **Multi-node config** - ``--data-parallel-size``, ``--data-parallel-size-local``, ``--data-parallel-address`` - ``--data-parallel-size N --data-parallel-rank 0..N-1`` + ``--data-parallel-address`` * - **Client view** - Single URL/port - Multiple URLs/ports * - **Load balancer** - Built-in (vLLM handles) - External (Nginx, Kong, K8s Service) * - **Coordination** - DP ranks sync via RPC (for MoE/MLA) - DP ranks sync via RPC (for MoE/MLA) * - **Best for** - Most deployments (simpler) - K8s/cloud environments with existing LB .. tip:: **Dense (non-MoE) models only:** You can run fully independent ``vllm serve`` instances without any DP flags, using your own load balancer. This avoids RPC coordination overhead entirely. For more technical details, see `vLLM Data Parallel Deployment `_ Data Parallel Attention (advanced) """""""""""""""""""""""""""""""""" For models with Multi-head Latent Attention (MLA) architecture like DeepSeek V2, V3, and R1, vLLM supports **Data Parallel Attention**, which provides request-level parallelism instead of model replication. This avoids KV cache duplication across tensor parallel ranks, significantly reducing memory usage and enabling larger batch sizes. 
**Key benefits for MLA models:** * Eliminates KV cache duplication when using tensor parallelism * Enables higher throughput for high-QPS serving scenarios * Better memory efficiency for large context windows **Usage with Expert Parallelism:** Data parallel attention works seamlessly with Expert Parallelism for MoE models: .. code-block:: bash # DeepSeek-R1 with DP attention and expert parallelism VLLM_ALL2ALL_BACKEND="allgather_reducescatter" vllm serve deepseek-ai/DeepSeek-R1 \ --data-parallel-size 8 \ --enable-expert-parallel \ --disable-nccl-for-dp-synchronization For more technical details, see `vLLM RFC #16037 `_. Expert parallelism ^^^^^^^^^^^^^^^^^^ Expert parallelism (EP) distributes expert layers of Mixture-of-Experts (MoE) models across multiple GPUs, where tokens are routed to the GPUs holding the experts they need. **Performance considerations:** Expert parallelism is designed primarily for cross-node MoE deployments where high-bandwidth interconnects (like InfiniBand) between nodes make EP communication efficient. For single-node Instinct MI300X/MI355X deployments with XGMI connectivity, tensor parallelism typically provides better performance due to optimized all-to-all collectives on XGMI. **When to use EP:** * Multi-node MoE deployments with fast inter-node networking * Models with very large numbers of experts that benefit from expert distribution * Workloads where EP's reduced data movement outweighs communication overhead **Single-node recommendation:** For Instinct MI300X/MI355X within a single node (≤8 GPUs), prefer tensor parallelism over expert parallelism for MoE models to leverage XGMI's high bandwidth and low latency. **Basic usage:** .. code-block:: bash # Enable expert parallelism for MoE models (DeepSeek example with 8 GPUs) vllm serve deepseek-ai/DeepSeek-R1 \ --tensor-parallel-size 8 \ --enable-expert-parallel **Combining with Tensor Parallelism:** When EP is enabled alongside tensor parallelism: * Fused MoE layers use expert parallelism * Non-fused MoE layers use tensor parallelism **Combining with Data Parallelism:** EP works seamlessly with Data Parallel Attention for optimal memory efficiency in MLA+MoE models (for example, DeepSeek V3): .. code-block:: bash # DP attention + EP for DeepSeek-R1 VLLM_ALL2ALL_BACKEND="allgather_reducescatter" vllm serve deepseek-ai/DeepSeek-R1 \ --data-parallel-size 8 \ --enable-expert-parallel \ --disable-nccl-for-dp-synchronization Throughput benchmarking ======================= This guide evaluates LLM inference by tokens per second (TPS). vLLM provides a built-in benchmark: .. code-block:: bash # Synthetic or dataset-driven benchmark vllm bench throughput --model /path/to/model [other args] * **Real-world dataset** (ShareGPT) example: .. code-block:: bash wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json vllm bench throughput --model /path/to/model --dataset /path/to/ShareGPT_V3_unfiltered_cleaned_split.json * **Synthetic**: set fixed ``--input-len`` and ``--output-len`` for reproducible runs. .. tip:: **Profiling checklist (ROCm)** 1. Fix your prompt distribution (ISL/OSL) and **vary one knob at a time** (graph mode, MBT). 2. Measure **TTFT**, **ITL**, and **TPS** together; don't optimize one in isolation. 3. Compare graph modes: **PIECEWISE** (balanced) vs **FULL**/``FULL_DECODE_ONLY`` (max throughput). 4. Sweep ``--max-num-batched-tokens`` around **8k–64k** to find your latency/throughput balance. 
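As a starting point for the checklist above, the following is a minimal sketch of a synthetic sweep over ``--max-num-batched-tokens`` using the built-in benchmark. It assumes ``vllm bench throughput`` accepts standard engine arguments such as ``--max-num-batched-tokens`` (verify with ``--help`` for your installed vLLM version); the model path is a placeholder.

.. code-block:: bash

   # Fix the prompt distribution (synthetic ISL/OSL) and vary only the
   # batched-token budget, logging tokens per second for each setting.
   MODEL=/path/to/model   # placeholder

   for MBT in 8192 16384 32768 65536; do
       echo "=== max-num-batched-tokens=${MBT} ==="
       vllm bench throughput \
           --model "${MODEL}" \
           --input-len 2048 \
           --output-len 512 \
           --max-num-batched-tokens "${MBT}" \
           2>&1 | tee "throughput_mbt_${MBT}.log"
   done

Comparing the reported TPS across runs, together with TTFT and ITL from your serving benchmarks, shows where the latency/throughput balance sits for your model.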
Maximizing instances per node ============================= To maximize **per-node throughput**, run as many vLLM instances as model memory allows, balancing KV-cache capacity. * **HBM capacities**: MI300X = 192 GB HBM3; MI355X = 288 GB HBM3E. * Up to **eight** single-GPU vLLM instances can run in parallel on an 8×GPU node (one per GPU): .. code-block:: bash for i in $(seq 0 7); do CUDA_VISIBLE_DEVICES="$i" vllm bench throughput -tp 1 --model /path/to/model --dataset /path/to/ShareGPT_V3_unfiltered_cleaned_split.json & done Total throughput from **N** single-GPU instances usually exceeds one instance stretched across **N** GPUs (``-tp N``). **Model coverage**: Llama 2 (7B/13B/70B), Llama 3 (8B/70B), Qwen2 (7B/72B), Mixtral-8x7B/8x22B, and others Llama2‑70B and Llama3‑70B can fit a single MI300X/MI355X; Llama3.1‑405B fits on a single 8×MI300X/MI355X node. Configure the gpu-memory-utilization parameter ================================================== The ``--gpu-memory-utilization`` parameter controls the fraction of GPU memory reserved for the KV-cache. The default is **0.9** (90%). There are two strategies: 1. **Increase** ``--gpu-memory-utilization`` to maximize throughput for a single instance (up to **0.95**). Example: .. code-block:: bash vllm serve meta-llama/Llama-3.3-70B-Instruct \ --gpu-memory-utilization 0.95 \ --max-model-len 8192 \ --port 8000 2. **Decrease** to pack **multiple** instances on the same GPU (for small models like 7B/8B), keeping KV-cache viable: .. code-block:: bash # Instance 1 on GPU 0 CUDA_VISIBLE_DEVICES=0 vllm serve meta-llama/Llama-3.1-8B-Instruct \ --gpu-memory-utilization 0.45 \ --max-model-len 4096 \ --port 8000 # Instance 2 on GPU 0 CUDA_VISIBLE_DEVICES=0 vllm serve meta-llama/Llama-Guard-3-8B \ --gpu-memory-utilization 0.45 \ --max-model-len 4096 \ --port 8001 vLLM engine arguments ===================== Selected arguments that often help on ROCm. See `Engine Arguments `__ in the vLLM documentation for the full list. Configure --max-num-seqs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The default value is **1024** in vLLM V1 (increased from **256** in V0). This flag controls the maximum number of sequences processed per batch, directly affecting concurrency and memory usage. * **To increase throughput**: Raise to **2048** or **4096** if memory allows, enabling more sequences per iteration. * **To reduce memory usage**: Lower to **256** or **128** for large models or long-context generation. For example, set ``--max-num-seqs 128`` to reduce concurrency and lower memory requirements. In vLLM V1, KV-cache token requirements are computed as ``max-num-seqs * max-model-len``. Example usage: .. code-block:: bash vllm serve --max-num-seqs 128 --max-model-len 8192 Configure --max-num-batched-tokens ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ **Chunked prefill is enabled by default** in vLLM V1. * Lower values improve **ITL** (less prefill interrupting decode). * Higher values improve **TTFT** (more prefill per batch). Defaults: **8192** for online serving, **16384** for offline. However, optimal values vary significantly by model size. Smaller models can efficiently handle larger batch sizes. Setting it near ``--max-model-len`` mimics V0 behavior and often maximizes throughput. **Guidance:** * **Interactive (low TTFT)**: keep MBT ≤ **8k–16k**. * **Streaming (low ITL)**: MBT **16k–32k**. * **Offline max throughput**: MBT **≥32k** (diminishing TPS returns beyond ~32k). **Pattern:** Smaller/more efficient models benefit from larger batch sizes. 
MoE models with expert parallelism can handle very large batches efficiently. **Rule of thumb** * Push MBT **up** to trade TTFT↑ for ITL↓ and slightly higher TPS. * Pull MBT **down** to trade ITL↑ for TTFT↓ (interactive UX). Async scheduling ^^^^^^^^^^^^^^^^ ``--async-scheduling`` (replaces deprecated ``num_scheduler_steps``) can improve throughput/ITL by trading off TTFT. Prefer **off** for latency-sensitive serving; **on** for offline batch throughput. CUDA graphs configuration ^^^^^^^^^^^^^^^^^^^^^^^^^^ CUDA graphs reduce kernel launch overhead by capturing and replaying GPU operations, improving inference throughput. Configure using ``--compilation-config '{"cudagraph_mode": "MODE"}'``. **Available modes:** * ``NONE`` — CUDA graphs disabled (debugging) * ``PIECEWISE`` — Attention stays eager, other ops use CUDA graphs (most compatible) * ``FULL`` — Full CUDA graphs for all batches (best for small models/prompts) * ``FULL_DECODE_ONLY`` — Full CUDA graphs only for decode (saves memory in prefill/decode split setups) * ``FULL_AND_PIECEWISE`` — **(default)** Full graphs for decode + piecewise for prefill (best performance, highest memory) **Default behavior:** V1 defaults to ``FULL_AND_PIECEWISE`` with piecewise compilation enabled; otherwise ``NONE``. **Backend compatibility:** Not all attention backends support all CUDA graph modes. Choose a mode your backend supports: .. list-table:: :header-rows: 1 :widths: 40 60 * - Attention backend - CUDA graph support * - vLLM/AITER Triton Unified Attention, vLLM Prefill-Decode Attention - Full support (prefill + decode) * - AITER MHA, AITER MLA - Uniform batches only * - vLLM Triton MLA - Must exclude attention from graph — ``PIECEWISE`` required **Usage examples:** .. code-block:: bash # Default (best performance, highest memory) vllm serve meta-llama/Llama-3.1-8B-Instruct # Decode-only graphs (lower memory, good for P/D split) vllm serve meta-llama/Llama-3.1-8B-Instruct \ --compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' # Full graphs for offline throughput (small models) vllm serve meta-llama/Llama-3.1-8B-Instruct \ --compilation-config '{"cudagraph_mode": "FULL"}' **Migration from legacy flags:** * ``use_cudagraph=False`` → ``NONE`` * ``use_cudagraph=True, full_cuda_graph=False`` → ``PIECEWISE`` * ``full_cuda_graph=True`` → ``FULL`` (with automatic fallback) Quantization support ==================== vLLM supports FP4/FP8 (4-bit/8-bit floating point) weight and activation quantization using hardware acceleration on the Instinct MI300X and MI355X. Quantization of models with FP4/FP8 allows for a **2x-4x** reduction in model memory requirements and up to a **1.6x** improvement in throughput with minimal impact on accuracy. vLLM ROCm supports a variety of quantization demands: * On-the-fly quantization * Pre-quantized model through Quark and llm-compressor Supported quantization methods ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vLLM on ROCm supports the following quantization methods for the AMD Instinct MI300 series and Instinct MI355X GPUs: .. 
list-table:: :header-rows: 1 :widths: 20 15 15 20 30 * - Method - Precision - ROCm support - Memory reduction - Best use case * - **FP8** (W8A8) - 8-bit float - Excellent - 2× (50%) - Production, balanced speed/accuracy * - **PTPC-FP8** - 8-bit float - Excellent - 2× (50%) - High throughput, better than ``FP8`` * - **AWQ** - 4-bit int (W4A16) - Good - 4× (75%) - Large models, memory-constrained * - **GPTQ** - 4-bit/8-bit int - Good - 2-4× (50-75%) - Pre-quantized models available * - **FP8 KV-cache** - 8-bit float - Excellent - KV cache: 50% - All inference workloads * - **Quark (AMD)** - ``FP8``/``MXFP4`` - Optimized - 2-4× (50-75%) - AMD pre-quantized models * - **compressed-tensors** - W8A8 ``INT8``/``FP8`` - Good - 2× (50%) - LLM Compressor models **ROCm support key:** - Excellent: Fully supported with optimized kernels - Good: Supported, might not have AMD-optimized kernels - Optimized: AMD-specific optimizations available Using Pre-quantized Models ^^^^^^^^^^^^^^^^^^^^^^^^^^^ AMD provides pre-quantized models optimized for ROCm. These models are ready to use with vLLM. **AMD Quark-quantized models**: Available on `Hugging Face `_: * `Llama‑3.1‑8B‑Instruct‑FP8‑KV `__ (FP8 W8A8) * `Llama‑3.1‑70B‑Instruct‑FP8‑KV `__ (FP8 W8A8) * `Llama‑3.1‑405B‑Instruct‑FP8‑KV `__ (FP8 W8A8) * `Mixtral‑8x7B‑Instruct‑v0.1‑FP8‑KV `__ (FP8 W8A8) * `Mixtral‑8x22B‑Instruct‑v0.1‑FP8‑KV `__ (FP8 W8A8) * `Llama-3.3-70B-Instruct-MXFP4-Preview `__ (MXFP4 for MI350/MI355) * `Llama-3.1-405B-Instruct-MXFP4-Preview `__ (MXFP4 for MI350/MI355) * `DeepSeek-R1-0528-MXFP4-Preview `__ (MXFP4 for MI350/MI355) **Quick start**: .. code-block:: bash # FP8 W8A8 Quark model vllm serve amd/Llama-3.1-8B-Instruct-FP8-KV \ --dtype auto # MXFP4 Quark model for MI350/MI355 vllm serve amd/Llama-3.3-70B-Instruct-MXFP4-Preview \ --dtype auto \ --tensor-parallel-size 1 **Other pre-quantized models**: - AWQ models: `Hugging Face awq flag `_ - GPTQ models: `Hugging Face gptq flag `_ - LLM Compressor models: `Hugging Face compressed-tensors flag `_ On-the-fly quantization ^^^^^^^^^^^^^^^^^^^^^^^^ For models without pre-quantization, vLLM can quantize ``FP16``/``BF16`` models at server startup. **Supported methods**: - ``fp8``: Per-tensor ``FP8`` weight and activation quantization - ``ptpc_fp8``: Per-token-activation per-channel-weight ``FP8`` (better accuracy same ``FP8`` speed). See `PTPC-FP8 on ROCm blog post `_ for details **Usage:** .. code-block:: bash # On-the-fly FP8 quantization vllm serve meta-llama/Llama-3.1-8B-Instruct \ --quantization fp8 \ --dtype auto # On-the-fly PTPC-FP8 (recommended as default) vllm serve meta-llama/Llama-3.1-70B-Instruct \ --quantization ptpc_fp8 \ --dtype auto \ --tensor-parallel-size 4 .. note:: On-the-fly quantization adds two to five minutes of startup time but eliminates pre-quantization. For production with frequent restarts, use pre-quantized models. GPTQ ^^^^ GPTQ is a 4-bit/8-bit weight quantization method that compresses models with minimal accuracy loss. GPTQ is fully supported on ROCm via HIP-compiled kernels in vLLM. **ROCm support status**: - **Fully supported** - GPTQ kernels compile and run on ROCm via HIP - **Pre-quantized models work** with standard GPTQ kernels **Recommendation**: For the AMD Instinct MI300X, **AWQ with Triton kernels** or **FP8 quantization** might provide better performance due to ROCm-specific optimizations, but GPTQ is a viable alternative. **Using pre-quantized GPTQ models**: .. 
code-block:: bash # Using pre-quantized GPTQ model on ROCm vllm serve RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 \ --quantization gptq \ --dtype auto \ --tensor-parallel-size 1 **Important notes**: - **Kernel support:** GPTQ uses standard HIP-compiled kernels on ROCm - **Performance:** AWQ with Triton kernels might offer better throughput on AMD GPUs due to ROCm optimizations - **Compatibility:** GPTQ models from Hugging Face work on ROCm with standard performance - **Use case:** GPTQ is suitable when pre-quantized GPTQ models are readily available AWQ (Activation-aware Weight Quantization) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AWQ (Activation-aware Weight Quantization) is a 4-bit weight quantization technique that provides excellent model compression with minimal accuracy loss (<1%). ROCm supports AWQ quantization on the AMD Instinct MI300 series and MI355X GPUs with vLLM. **Using pre-quantized AWQ models:** Many AWQ-quantized models are available on Hugging Face. Use them directly with vLLM: .. code-block:: bash # vLLM serve with AWQ model VLLM_USE_TRITON_AWQ=1 \ vllm serve hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 \ --quantization awq \ --tensor-parallel-size 1 \ --dtype auto **Important Notes:** * **ROCm requirement:** Set ``VLLM_USE_TRITON_AWQ=1`` to enable Triton-based AWQ kernels on ROCm * **dtype parameter:** AWQ requires ``--dtype auto`` or ``--dtype float16``. The ``--dtype`` flag controls the **activation dtype** (``FP16``/``BF16`` for computations), not the weight dtype. AWQ weights remain as INT4 (4-bit integers) as specified in the model's quantization config, but are dequantized to ``FP16``/``BF16`` during matrix multiplication operations. * **Group size:** 128 is recommended for optimal performance/accuracy balance * **Model compatibility:** AWQ is primarily tested on Llama, Mistral, and Qwen model families Quark (AMD quantization toolkit) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AMD Quark is the AMD quantization toolkit optimized for ROCm. It supports ``FP8 W8A8``, ``MXFP4``, ``W8A8 INT8``, and other quantization formats with native vLLM integration. The quantization format will automatically be inferred from the model config file, so you can omit ``--quantization quark``. **Running Quark Models:** .. code-block:: bash # FP8 W8A8: Single GPU vllm serve amd/Llama-3.1-8B-Instruct-FP8-KV \ --dtype auto \ --max-model-len 8192 \ --gpu-memory-utilization 0.90 # MXFP4: Extreme memory efficiency vllm serve amd/Llama-3.3-70B-Instruct-MXFP4-Preview \ --dtype auto \ --tensor-parallel-size 1 \ --max-model-len 8192 **Key features:** - **FP8 models**: ~50% memory reduction, 2× compression - **MXFP4 models**: ~75% memory reduction, 4× compression - **Embedded scales**: Quark FP8-KV models include pre-calibrated KV-cache scales - **Hardware optimized**: Leverages the AMD Instinct MI300 series ``FP8`` acceleration For creating your own Quark-quantized models, see `Quark Documentation `_. FP8 kv-cache dtype ^^^^^^^^^^^^^^^^^^^^ FP8 KV-cache quantization reduces memory footprint by approximately 50%, enabling longer context lengths or higher concurrency. ROCm supports FP8 KV-cache with both ``fp8_e4m3`` and ``fp8_e5m2`` formats on AMD Instinct MI300 series and other CDNA™ GPUs. Use ``--kv-cache-dtype fp8`` to enable ``FP8`` KV-cache quantization. For best accuracy, use calibrated scaling factors generated via `LLM Compressor `_. Without calibration, scales are calculated dynamically (``--calculate-kv-scales``) with minimal accuracy impact. 
**Quick start (dynamic scaling)**: .. code-block:: bash # vLLM serve with dynamic FP8 KV-cache vllm serve meta-llama/Llama-3.1-8B-Instruct \ --kv-cache-dtype fp8 \ --calculate-kv-scales \ --gpu-memory-utilization 0.90 **Calibrated scaling (advanced)**: For optimal accuracy, pre-calibrate KV-cache scales using representative data. The calibration process: #. Runs the model on calibration data (512+ samples recommended) #. Computes optimal ``FP8`` quantization scales for key/value cache tensors #. Embeds these scales into the saved model as additional parameters #. vLLM loads the model and uses the embedded scales automatically when ``--kv-cache-dtype fp8`` is specified The quantized model can be used like any other model. The embedded scales are stored as part of the model weights. **Using pre-calibrated models:** AMD provides ready-to-use models with pre-calibrated ``FP8`` KV cache scales: * `amd/Llama-3.1-8B-Instruct-FP8-KV `_ * `amd/Llama-3.3-70B-Instruct-FP8-KV `_ To verify a model has pre-calibrated KV cache scales, check ``config.json`` for: .. code-block:: json "quantization_config": { "kv_cache_scheme": "static" // Indicates pre-calibrated scales are embedded } **Creating your own calibrated model:** .. code-block:: bash # 1. Install LLM Compressor pip install llmcompressor # 2. Run calibration script (see llm-compressor repo for full example) python llama3_fp8_kv_example.py # 3. Use calibrated model in vLLM vllm serve ./Meta-Llama-3-8B-Instruct-FP8-KV \ --kv-cache-dtype fp8 For detailed instructions and the complete calibration script, see the `FP8 KV Cache Quantization Guide `_. **Format options**: - ``fp8`` or ``fp8_e4m3``: Higher precision (default, recommended) - ``fp8_e5m2``: Larger dynamic range, slightly lower precision Speculative decoding (experimental) =================================== Recent vLLM versions add support for speculative decoding backends (for example, Eagle‑v3). Evaluate for your model and latency/throughput goals. Speculative decoding is a technique to reduce latency when max number of concurrency is low. Depending on the methods, the effective concurrency varies, for example, from 16 to 64. Example command: .. code-block:: bash vllm serve meta-llama/Llama-3.1-8B-Instruct \ --trust-remote-code \ --swap-space 16 \ --disable-log-requests \ --tensor-parallel-size 1 \ --distributed-executor-backend mp \ --dtype float16 \ --quantization fp8 \ --kv-cache-dtype fp8 \ --no-enable-chunked-prefill \ --max-num-seqs 300 \ --max-num-batched-tokens 131072 \ --gpu-memory-utilization 0.8 \ --speculative_config '{"method": "eagle3", "model": "yuhuili/EAGLE3-LLaMA3.1-Instruct-8B", "num_speculative_tokens": 2, "draft_tensor_parallel_size": 1, "dtype": "float16"}' \ --port 8001 .. important:: It has been observed that more ``num_speculative_tokens`` causes less acceptance rate of draft model tokens and a decline in throughput. As a workaround, set ``num_speculative_tokens`` to <= 2. Multi-node checklist and troubleshooting ======================================== 1. Use ``--distributed-executor-backend ray`` across nodes to manage HIP-visible ranks and RCCL communicators. (``ray`` is the default for multi-node. Explicitly setting this flag is optional.) 2. Ensure ``/dev/shm`` is shared across ranks (Docker ``--shm-size``, Kubernetes ``emptyDir``), as RCCL uses shared memory for rendezvous. 3. For GPUDirect RDMA, set ``RCCL_NET_GDR_LEVEL=2`` and verify links (``ibstat``). Requires supported NICs (for example, ConnectX‑6+). 4. 
Collect RCCL logs: ``RCCL_DEBUG=INFO`` and optionally ``RCCL_DEBUG_SUBSYS=INIT,GRAPH`` for init/graph stalls. Further reading =============== * :doc:`workload` * :doc:`/how-to/rocm-for-ai/inference/benchmark-docker/vllm` --- .. meta:: :description: Learn about workload tuning on AMD Instinct MI300X GPUs for optimal performance. :keywords: AMD, Instinct, MI300X, HPC, tuning, BIOS settings, NBIO, ROCm, environment variable, performance, HIP, Triton, PyTorch TunableOp, vLLM, RCCL, MIOpen, GPU, resource utilization ***************************************** AMD Instinct MI300X workload optimization ***************************************** This document provides guidelines for optimizing the performance of AMD Instinct™ MI300X GPUs, with a particular focus on GPU kernel programming, high-performance computing (HPC), and deep learning operations using PyTorch. It delves into specific workloads such as :ref:`model inference `, offering strategies to enhance efficiency. The following topics highlight :ref:`auto-tunable configurations ` as well as :ref:`Triton kernel optimization ` for meticulous tuning. Workload tuning strategy ======================== By following a structured approach, you can systematically address performance issues and enhance the efficiency of your workloads on AMD Instinct MI300X GPUs. Measure the current workload ---------------------------- Begin by evaluating the performance of your workload in its current state. This involves running benchmarks and collecting performance data to establish a baseline. Understanding how your workload behaves under different conditions provides critical insights into where improvements are needed. .. _mi300x-profiling-start: Identify tuning requirements ---------------------------- Analyze the collected performance data to identify areas where tuning is required. This could involve detecting bottlenecks in CPU, GPU, memory, or data transfer. Understanding these requirements will help direct your optimization efforts more effectively. Profiling is a fundamental step in workload tuning. It allows you to gather detailed information about how your workload utilizes system resources, and where potential inefficiencies lie. Profiling tools can provide insights into both high-level and granular performance metrics. See :ref:`mi300x-profiling-tools`. High-level profiling tools ^^^^^^^^^^^^^^^^^^^^^^^^^^ For a broad overview, use tools like the :ref:`PyTorch Profiler `, which helps in understanding how PyTorch operations are executed and where time is spent. This is particularly useful for developers new to workload tuning, as it provides a comprehensive view without requiring in-depth knowledge of lower-level operations. Kernel-level profiling tools ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ When profiling indicates that GPUs are a performance bottleneck, delve deeper into kernel-level profiling. Tools such as the :ref:`ROCr Debug Agent `, :ref:`ROCProfiler `, and :ref:`ROCm Compute Profiler ` offer detailed insights into GPU kernel execution. These tools can help isolate problematic GPU operations and provide data needed for targeted optimizations. Analyze and tune ---------------- Based on the insights gained from profiling, focus your tuning efforts on the identified bottlenecks. This might involve optimizing specific kernel operations, adjusting memory access patterns, or modifying computational algorithms. The following subsections discuss optimization ranging from high-level and more automated strategies to more involved, hands-on optimization. 
Optimize model inference with vLLM ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vLLM provides tools and techniques specifically designed for efficient model inference on AMD Instinct GPUs. See the official `vLLM installation docs `__ for installation guidance. Optimizing performance with vLLM involves configuring tensor parallelism, leveraging advanced features, and ensuring efficient execution. * Configuration for vLLM: Set engine arguments according to workload requirements. * Benchmarking and performance metrics: Measure latency and throughput to evaluate performance. .. seealso:: See :doc:`vllm-optimization` to learn more about vLLM performance optimization techniques. .. _mi300x-auto-tune: Auto-tunable configurations ^^^^^^^^^^^^^^^^^^^^^^^^^^^ Auto-tunable configurations can significantly streamline performance optimization by automatically adjusting parameters based on workload characteristics. For example: * PyTorch: Utilize :ref:`PyTorch’s built-in auto-tuning features `, such as the :ref:`TunableOp ` module, which helps in optimizing operation performance by exploring different configurations. * MIOpen: Leverage :ref:`MIOpen’s auto-tuning capabilities ` for convolutional operations and other primitives to find optimal settings for your specific hardware. * Triton: Use :ref:`Triton’s auto-tuning features ` to explore various kernel configurations and select the best-performing ones. Manual tuning ^^^^^^^^^^^^^ Advanced developers can manually adjust parameters and configurations to optimize performance. Both Triton and HIP involve manual tuning aspects. * ROCm libraries: Optimize GPU performance by adjusting various parameters and configurations within :ref:`ROCm libraries `. This approach involves hands-on optimization to maximize efficiency for specific workloads. * Triton: Tune Triton kernels by adjusting parameters tailored to your workload to :ref:`optimize GPU resource utilization ` and better :ref:`leverage specific hardware features `. * HIP: Profile and :ref:`optimize HIP kernels ` by optimizing parallel execution, memory access patterns, and other aspects. Iterate and validate -------------------- Optimization is an iterative process. After applying tuning changes, re-profile the workload to validate improvements and ensure that the changes have had the desired effect. Continuous iteration helps refine the performance gains and address any new bottlenecks that may emerge. ROCm provides a prebuilt optimized Docker image that has everything required to implement the LLM inference tips in this section. It includes ROCm, PyTorch, and vLLM. For more information, see :doc:`/how-to/rocm-for-ai/inference/benchmark-docker/vllm`. .. _mi300x-profiling-tools: Profiling tools =============== AMD profiling tools provide valuable insights into how efficiently your application utilizes hardware and help diagnose potential bottlenecks that contribute to poor performance. Developers targeting AMD GPUs have multiple tools available depending on their specific profiling needs. * ROCProfiler tool collects kernel execution performance metrics. For more information, see the :doc:`ROCProfiler ` documentation. * ROCm Compute Profiler builds upon ROCProfiler but provides more guided analysis. For more information, see :doc:`ROCm Compute Profiler documentation `. Refer to :doc:`profiling-and-debugging` to explore commonly used profiling tools and their usage patterns. Once performance bottlenecks are identified, you can implement an informed workload tuning strategy. 
If kernels are the bottleneck, consider: * :ref:`Auto-tuning in PyTorch with TunableOp ` * :ref:`Auto-tuning in MIOpen ` * :ref:`Triton auto-tunable kernel configurations ` If auto-tuning does not meet your requirements, consider :ref:`mi300x-triton-kernel-performance-optimization`. If the issue is multi-GPU scale-out, try :ref:`RCCL tuning and configuration `. This section discusses profiling and debugging tools and some of their common usage patterns with ROCm applications. .. _mi300x-pytorch-profiler: PyTorch Profiler ---------------- `PyTorch Profiler `_ can be invoked inside Python scripts, letting you collect CPU and GPU performance metrics while the script is running. See the `PyTorch Profiler tutorial `_ for more information. You can then visualize and view these metrics using an open-source profile visualization tool like `Perfetto UI `_. #. Use the following snippet to invoke PyTorch Profiler in your code. .. code-block:: python import torch import torchvision.models as models from torch.profiler import profile, record_function, ProfilerActivity model = models.resnet18().cuda() inputs = torch.randn(2000, 3, 224, 224).cuda() with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof: with record_function("model_inference"): model(inputs) prof.export_chrome_trace("resnet18_profile.json") #. Profile results in ``resnet18_profile.json`` can be viewed by the Perfetto visualization tool. Go to ``__ and import the file. In your Perfetto visualization, you'll see that the upper section shows transactions denoting the CPU activities that launch GPU kernels while the lower section shows the actual GPU activities where it processes the ``resnet18`` inferences layer by layer. .. figure:: ../../../data/how-to/tuning-guides/perfetto-trace.svg :width: 800 Perfetto trace visualization example. ROCm profiling tools -------------------- Heterogenous systems, where programs run on both CPUs and GPUs, introduce additional complexities. Understanding the critical path and kernel execution is all the more important. So, performance tuning is a necessary component in the benchmarking process. With AMD's profiling tools, developers are able to gain important insight into how efficiently their application is using hardware resources and effectively diagnose potential bottlenecks contributing to poor performance. Developers working with AMD Instinct GPUs have multiple tools depending on their specific profiling needs; these include: * :ref:`ROCProfiler ` * :ref:`ROCm Compute Profiler ` * :ref:`ROCm Systems Profiler ` .. _mi300x-rocprof: ROCProfiler ^^^^^^^^^^^ :doc:`ROCProfiler ` is primarily a low-level API for accessing and extracting GPU hardware performance metrics, commonly called *performance counters*. These counters quantify the performance of the underlying architecture showcasing which pieces of the computational pipeline and memory hierarchy are being utilized. Your ROCm installation contains a script or executable command called ``rocprof`` which provides the ability to list all available hardware counters for your specific GPU, and run applications while collecting counters during their execution. This ``rocprof`` utility also depends on the :doc:`ROCTracer and ROC-TX libraries `, giving it the ability to collect timeline traces of the GPU software stack as well as user-annotated code regions. .. note:: ``rocprof`` is a CLI-only utility where inputs and outputs take the form of text and CSV files. 
These formats provide a raw view of the data and puts the onus on the user to parse and analyze. ``rocprof`` gives the user full access and control of raw performance profiling data, but requires extra effort to analyze the collected data. .. _mi300x-rocprof-compute: ROCm Compute Profiler ^^^^^^^^^^^^^^^^^^^^^ :doc:`ROCm Compute Profiler ` is a system performance profiler for high-performance computing (HPC) and machine learning (ML) workloads using Instinct GPUs. Under the hood, ROCm Compute Profiler uses :ref:`ROCProfiler ` to collect hardware performance counters. The ROCm Compute Profiler tool performs system profiling based on all approved hardware counters for Instinct GPU architectures. It provides high level performance analysis features including System Speed-of-Light, IP block Speed-of-Light, Memory Chart Analysis, Roofline Analysis, Baseline Comparisons, and more. ROCm Compute Profiler takes the guesswork out of profiling by removing the need to provide text input files with lists of counters to collect and analyze raw CSV output files as is the case with ROCProfiler. Instead, ROCm Compute Profiler automates the collection of all available hardware counters in one command and provides graphical interfaces to help users understand and analyze bottlenecks and stressors for their computational workloads on AMD Instinct GPUs. .. note:: ROCm Compute Profiler collects hardware counters in multiple passes, and will therefore re-run the application during each pass to collect different sets of metrics. .. figure:: ../../../data/how-to/tuning-guides/rocprof-compute-analysis.png :width: 800 ROCm Compute Profiler memory chart analysis panel. In brief, ROCm Compute Profiler provides details about hardware activity for a particular GPU kernel. It also supports both a web-based GUI or command-line analyzer, depending on your preference. .. _mi300x-rocprof-systems: ROCm Systems Profiler ^^^^^^^^^^^^^^^^^^^^^ :doc:`ROCm Systems Profiler ` is a comprehensive profiling and tracing tool for parallel applications, including HPC and ML packages, written in C, C++, Fortran, HIP, OpenCL, and Python which execute on the CPU or CPU and GPU. It is capable of gathering the performance information of functions through any combination of binary instrumentation, call-stack sampling, user-defined regions, and Python interpreter hooks. ROCm Systems Profiler supports interactive visualization of comprehensive traces in the web browser in addition to high-level summary profiles with ``mean/min/max/stddev`` statistics. Beyond runtime information, ROCm Systems Profiler supports the collection of system-level metrics such as CPU frequency, GPU temperature, and GPU utilization. Process and thread level metrics such as memory usage, page faults, context switches, and numerous other hardware counters are also included. .. tip:: When analyzing the performance of an application, it is best not to assume you know where the performance bottlenecks are and why they are happening. ROCm Systems Profiler is the ideal tool for characterizing where optimization would have the greatest impact on the end-to-end execution of the application and to discover what else is happening on the system during a performance bottleneck. .. figure:: ../../../data/how-to/tuning-guides/rocprof-systems-timeline.png :width: 800 ROCm Systems Profiler timeline trace example. 
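To complement the tool descriptions above, here is a minimal sketch of collecting a kernel-level profile from a Python workload with the ``rocprof`` CLI. The script name is a placeholder, and the available options vary between ROCm releases, so check ``rocprof --help`` (or the ROCProfiler documentation) on your system first.

.. code-block:: bash

   # Collect per-kernel statistics and a HIP API/kernel timeline trace.
   # The resulting CSV/JSON files can be inspected directly or loaded into a
   # trace viewer such as Perfetto.
   rocprof --stats --hip-trace -o rocprof_results.csv python3 run_workload.py

Start with a high-level view (PyTorch Profiler or ROCm Systems Profiler) to locate hotspots, then narrow down to kernel-level counters with ROCProfiler or ROCm Compute Profiler as described above.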
vLLM performance optimization ============================= vLLM is a high-throughput and memory efficient inference and serving engine for large language models that has gained traction in the AI community for its performance and ease of use. See :doc:`vllm-optimization`, where you'll learn how to: * Enable AITER (AI Tensor Engine for ROCm) to speed up on LLM models. * Configure environment variables for optimal HIP, RCCL, and Quick Reduce performance. * Select the right attention backend for your workload (AITER MHA/MLA vs. Triton). * Choose parallelism strategies (tensor, pipeline, data, expert) for multi-GPU deployments. * Apply quantization (``FP8``/``FP4``) to reduce memory usage by 2-4× with minimal accuracy loss. * Tune engine arguments (batch size, memory utilization, graph modes) for your use case. * Benchmark and scale across single-node and multi-node configurations. .. _mi300x-tunableop: PyTorch TunableOp ================== `TunableOp `_ is a feature used to obtain the optimal GPU kernel for a key PyTorch operations. At the moment, TunableOp supports the tuning of dense matrix multiplies (GEMM, batched GEMM, GEMM and bias, and scaled GEMM). This feature is useful for squeezing out the last bit of performance. In short, it will try up to thousands of matrix multiply algorithms that are available in rocBLAS and hipBLASLt. A caveat is that as the math libraries improve over time, there is a less benefit to using TunableOp, and there is also no guarantee that the workload being tuned will be able to outperform the default GEMM algorithm in hipBLASLt. Some additional references for PyTorch TunableOp include `ROCm blog `__, TunableOp `README `__, and `llm tuning `__. The three most important environment variables for controlling TunableOp are: ``PYTORCH_TUNABLEOP_ENABLED`` The main on/off switch for all TunableOp implementations. Default is ``0`` (disabled). Set to ``1`` to enable. ``PYTORCH_TUNABLEOP_TUNING`` When enabled, if a tuned entry isn't found, runs the tuning step and records the entry. Default is ``1`` (enabled). Set to ``0`` to disable. ``PYTORCH_TUNABLEOP_VERBOSE`` Enables verbose output for debugging purposes -- it can be useful to see if TunableOp is being used at all. Default is ``0`` (disabled). Set to ``1`` to enable. For the complete list of environment variables, see the TunableOp `README `__. There are also Python APIs to set some of these environment variables, but the preferred way to set the TunableOp tuning parameters is to use the environment variables. Workflow -------- Use these environment variables to enable TunableOp for any applications or libraries that use PyTorch (2.3 or later). The first step is the tuning pass: 1. Enable TunableOp and tuning. Optionally enable verbose mode: .. code-block:: shell PYTORCH_TUNABLEOP_ENABLED=1 PYTORCH_TUNABLEOP_VERBOSE=1 your_script.sh This pass can be very slow. The output will be the ``tunableop_results.csv`` file containing a list of GEMMs encountered and the optimal GPU kernel that was identified. Multi-GPU tuning is supported, producing a separate tunableop_results.csv file for each GPU. The tuning algorithm executes independently on each GPU, with each tuning process sandboxed to its respective GPU. There is no inter-GPU communication during tuning. For data-parallel algorithms, where GEMM configurations across GPUs are typically identical, this approach can result in redundant work. In such cases, running the workload on a single GPU might suffice. 
However, for algorithms involving multiple levels of parallelism (as in data parallelism combined with ML model parallelism), different GPUs might require distinct GEMM parameters. In these scenarios, a multi-GPU configuration is recommended. In the second step, we re-run the workload with optimal configuration using the ``tunableop_results.csv`` file obtained in step 1. 2. Enable TunableOp, disable tuning, and measure: .. code-block:: shell PYTORCH_TUNABLEOP_ENABLED=1 PYTORCH_TUNABLEOP_TUNING=0 your_script.sh Compare the wall-clock time from this second step to your reference wall-clock time with TunableOp completely disabled (``PYTORCH_TUNABLEOP_ENABLED=0``). Offline tuning -------------- A new feature of TunableOp, offline tuning, is available in upstream PyTorch and supported in PyTorch 2.6 or later. Traditionally, tuning is performed in-place during workload execution. While convenient for one-off tuning, this approach can become cumbersome if frequent re-tuning is required -- such as when a new version of a math library is released. In these cases, re-running the workload and performing tuning repeatedly can be inefficient. Offline tuning addresses this challenge by decoupling the tuning process from workload execution. It enables the collection of GEMMs from a workload during a collection pass, followed by tuning these GEMMs in a separate tuning pass, without re-running the original workload. This approach significantly reduces compute resource requirements, particularly for time-intensive workloads. For workflow instructions, refer to the `Offline Tuning documentation `_. .. _mi300x-torchinductor-tuning: PyTorch inductor max-autotune tuning knobs ========================================== The following are suggestions for optimizing matrix multiplication (GEMM) and convolution (``conv``) operations in PyTorch using ``inductor``, a part of the PyTorch compilation framework. Learn more about TorchInductor environment variables and usage in the `PyTorch documentation `_. .. note:: Triton is not used if regular :doc:`MIOpen ` or :doc:`rocBLAS ` performs faster for a specific operation. .. note:: Experimental: TunableOp (see the :ref:`PyTorch TunableOp ` section) can also be used in combination with ``TorchInductor`` ``max-autotune`` mode to boost ATen GEMM performance but will further increase tuning time. The environment variable ``TORCHINDUCTOR_AUTOTUNE_MULTI_DEVICE=1`` can be useful in single GPU workloads to distribute Triton GEMM tuning. Triton backend -------------- The goal is to leverage Triton to achieve better performance. To tune Triton kernels with ``gemm`` and convolution ops (``conv``), use the ``torch.compile`` function with the ``max-autotune`` mode. This benchmarks a predefined list of Triton configurations and selects the fastest one for each shape. See the configurations in PyTorch source code: * `conv configurations for "max-autotune" `_ * `matmul configurations for "max-autotune" `_ This tuning will select the best Triton ``gemm`` configurations according to tile-size ``(BLOCK_M, BLOCK_N, BLOCK_K), num_stages, num_warps`` and ``mfma`` instruction size ( ``matrix_instr_nonkdim`` ) (see "Triton kernel optimization" section for more details). * Set ``torch._inductor.config.max_autotune = True`` or ``TORCHINDUCTOR_MAX_AUTOTUNE=1``. * Or, for more fine-grained control: ``torch._inductor.config.max_autotune_gemm = True`` To enable tuning or lowering of ``mm``/``conv``\s. 
``torch._inductor.config.max_autotune_pointwise = True`` To enable tuning for ``pointwise``/``reduction`` ops. ``torch._inductor.config.max_autotune_gemm_backends`` or ``TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS`` Selects the candidate backends for ``mm`` auto-tuning. Defaults to ``TRITON,ATEN``. Limiting this to ``TRITON`` might improve performance by enabling more fused ``mm`` kernels instead of going to rocBLAS. * Inference can see large improvements on AMD GPUs by utilizing ``torch._inductor.config.freezing=True`` or the ``TORCHINDUCTOR_FREEZING=1`` variable, which inlines weights as constants and enables constant folding optimizations. * Enabling ``inductor``’s cpp_wrapper might reduce overhead. This generates C++ code which launches Triton binaries directly with ``hipModuleLaunchKernel`` and relies on hipification. ``torch._inductor.config.cpp_wrapper=True`` or ``TORCHINDUCTOR_CPP_WRAPPER=1`` * Convolution workloads might see a performance benefit by specifying ``torch._inductor.config.layout_optimization=True`` or ``TORCHINDUCTOR_LAYOUT_OPTIMIZATION=1``. This can help performance by enforcing the ``channels_last`` memory format on the convolution in TorchInductor, avoiding any unnecessary transpose operations. Note that ``PYTORCH_MIOPEN_SUGGEST_NHWC=1`` is recommended if using this. * To extract the Triton kernels generated by ``inductor``, set the environment variable ``TORCH_COMPILE_DEBUG=1``, which will create a ``torch_compile_debug/`` directory in the current path. The wrapper codes generated by ``inductor`` are in one or more ``output_code.py`` files corresponding to the FX graphs associated with the model. The Triton kernels are defined in these generated codes. Composable Kernel backend -------------------------- You can enable the Composable Kernel (``CK``) backend by appending ``CK`` to the comma-separated list of backends in ``torch._inductor.config.max_autotune_gemm_backends`` or ``TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS``. This allows the auto-tuning process to use kernels from the Composable Kernel library. Install the Composable Kernel library's Python wrapper via pip using the following command: .. code-block:: shell pip install git+https://github.com/rocm/composable_kernel@develop This wrapper library is responsible for constructing a list of kernel instances available in the Composable Kernel library, as well as storing the kernel instance C++ includes in a known location (so clang can look into these paths when compiling the ``gemm`` auto-tune candidates). The following operations can draw auto-tuning candidates from the ``CK`` backend: * ``matmul`` (with ``float16`` and ``bfloat16`` inputs, row-major X, row-major or column-major W) * ``addmm`` (with ``float16`` or ``bfloat16`` X, W and Bias; row-major X, row-major or column-major W; Bias can be broadcast either along the row-major or column-major dimension) * ``scaled_mm`` (``float8_e4m3fnuz`` inputs, ``bfloat16`` output) * ``conv2d`` (with ``float32``, ``float16`` or ``bfloat16`` inputs, channels-last weight layout) * For working examples, see `test/inductor/test_ck_backend.py `_. * Compile and build time can be reduced by modifying ``torch._inductor.config`` to avoid time-outs: * ``compile_threads``: Number of threads used for compilation. Set it to the number of available CPU cores. * ``rocm.n_max_profiling_configs``: Limits the number of kernels profiled to speed up compilation. * Set the environment variable ``PYTORCH_MIOPEN_SUGGEST_NHWC=1`` to optimize convolution operations. A brief configuration sketch combining these settings follows.
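The following is a minimal sketch of how these knobs can be combined from Python, assuming a ROCm PyTorch build with the CK wrapper installed as described above. The backend list, thread count, profiling-config limit, and model shapes are illustrative placeholders, not tuned recommendations.

.. code-block:: python

   # Hedged sketch: TorchInductor max-autotune with the CK backend enabled.
   # The specific values below are illustrative, not recommendations.
   import torch
   import torch._inductor.config as inductor_config

   inductor_config.max_autotune = True                            # TORCHINDUCTOR_MAX_AUTOTUNE=1
   inductor_config.max_autotune_gemm_backends = "CK,TRITON,ATEN"  # append CK to the candidate list
   inductor_config.compile_threads = 32                           # e.g., number of available CPU cores
   inductor_config.rocm.n_max_profiling_configs = 20              # cap CK kernels profiled per GEMM
   inductor_config.freezing = True                                # inference-only constant folding

   model = torch.nn.Sequential(
       torch.nn.Linear(4096, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 4096)
   ).half().cuda().eval()

   compiled = torch.compile(model, mode="max-autotune")
   x = torch.randn(64, 4096, device="cuda", dtype=torch.float16)
   with torch.no_grad():
       y = compiled(x)  # first call triggers auto-tuning; later calls reuse the selected kernels

Passing ``mode="max-autotune"`` to ``torch.compile`` enables the same tuning path as the ``max_autotune`` config flag; setting both in the snippet simply makes the intent explicit.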
Debugging and troubleshooting performance: * Generate a standalone executable runner to debug or assess kernels' performance by setting the environment variable ``INDUCTOR_CK_BACKEND_GENERATE_TEST_RUNNER_CODE=1`` to facilitate debugging and profiling. By default, the CK backend will not build a standalone executable runner. * Enable debug builds by passing compilation flags (e.g., ``is_debug``) to clang when compiling the kernels in the ``torch._inductor.config.rocm`` class. * The generated source files and other products of clang compilation are located in the torch inductor root directory (default: ``/tmp/torchinductor_root``). .. _mi300x-rocm-library-tuning: ROCm library tuning =================== ROCm library tuning involves optimizing the performance of routine computational operations (such as ``GEMM``) provided by ROCm libraries like :ref:`hipBLASLt `, :ref:`Composable Kernel `, :ref:`MIOpen `, and :ref:`RCCL `. This tuning aims to maximize efficiency and throughput on Instinct MI300X GPUs to improve application performance. .. _mi300x-library-gemm: GEMM (general matrix multiplication) ------------------------------------ GEMMs (General Matrix Multiplications) are a fundamental building block for many operations in neural networks. GEMM is defined as ``C = αAB + βC``, where A is an ``MxK`` input matrix, B is a ``KxN`` input matrix, and C is an ``MxN`` matrix that is overwritten by the output. α and β are scalar inputs. hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library. .. _mi300x-hipblaslt: hipBLASLt benchmarking ^^^^^^^^^^^^^^^^^^^^^^ The GEMM library `hipBLASLt `_ provides a benchmark tool for its supported operations. Refer to the `documentation `_ for details. * Example 1: Benchmark mixed FP8 GEMM .. code-block:: shell HIP_FORCE_DEV_KERNARG=1 hipblaslt-bench --alpha 1 --beta 0 -r f16_r \ --a_type f16_r --b_type f8_r --compute_type f32_f16_r \ --initialization trig_float --cold_iters 100 --iters 1000 --rotating 256 * Example 2: Benchmark forward epilogues and backward epilogues * ``HIPBLASLT_EPILOGUE_RELU: "--activation_type relu";`` * ``HIPBLASLT_EPILOGUE_BIAS: "--bias_vector";`` * ``HIPBLASLT_EPILOGUE_RELU_BIAS: "--activation_type relu --bias_vector";`` * ``HIPBLASLT_EPILOGUE_GELU: "--activation_type gelu";`` * ``HIPBLASLT_EPILOGUE_DGELU: "--activation_type gelu --gradient";`` * ``HIPBLASLT_EPILOGUE_GELU_BIAS: "--activation_type gelu --bias_vector";`` * ``HIPBLASLT_EPILOGUE_GELU_AUX: "--activation_type gelu --use_e";`` * ``HIPBLASLT_EPILOGUE_GELU_AUX_BIAS: "--activation_type gelu --bias_vector --use_e";`` * ``HIPBLASLT_EPILOGUE_DGELU_BGRAD: "--activation_type gelu --bias_vector --gradient";`` * ``HIPBLASLT_EPILOGUE_BGRADA: "--bias_vector --gradient --bias_source a";`` * ``HIPBLASLT_EPILOGUE_BGRADB: "--bias_vector --gradient --bias_source b";`` hipBLASLt auto-tuning using hipblaslt-bench ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Use the auto-tuning tool in hipBLASLt to get the best solution for a given problem size. Prerequisite '''''''''''' Build hipBLASLt. See the `hipBLASLt repository `_ for detailed build instructions. Quick start ''''''''''' Create a working folder for the auto-tuning tool, for example, ``tuning/``. 1. Set the ``ProblemType``, ``TestConfig``, and ``TuningParameters`` in the YAML file. You can modify the template YAML file in ``hipblaslt/utilities``. ..
figure:: ../../../data/how-to/tuning-guides/hipblaslt_yaml_template.png :align: center :alt: HipBLASLt auto-tuning yaml file template 2. Run the following command to start tuning. .. code-block:: shell # python3 hipblaslt/utilities/find_exact.py # Assume we're in the folder tuning/; the default root of the hipBLASLt build folder is hipblaslt/build/release python3 ../hipblaslt/utilities/find_exact.py tuning.yaml hipblaslt/build/release ./ Output '''''' The tool will create two output folders. The first one contains the benchmark results; the second one contains the generated equality kernels. If ``SplitK`` is used, the solution's ``GlobalSplitU`` will also change if the winner is using a different ``SplitK`` from the solution. The YAML files generated inside the folder ``1_LogicYaml`` are logic ones. These YAML files are just like those generated from TensileLite. .. figure:: ../../../data/how-to/tuning-guides/hipblaslt_auto_tuning_output_files.png :align: center :alt: HipBLASLt auto-tuning output folder A quick view of the config YAML ''''''''''''''''''''''''''''''' The tuning tool is a two-step tool. It first runs the benchmark, then it creates the equality YAML for the user. Note that this config YAML file is different from the config YAML used in TensileLite. * **Benchmarking** The first step is to run the benchmark: ``find_exact.py`` runs it with ``hipblaslt-bench``. For the default configurations, see the Python file. .. code-block:: python defaultBenchOptions = {"ProblemType": { "TransposeA": 0, "TransposeB": 0, "ComputeInputDataType": "s", "ComputeDataType": "s", "DataTypeC": "s", "DataTypeD": "s", "UseBias": False }, "TestConfig": { "ColdIter": 20, "Iter": 100, "AlgoMethod": "all", "RequestedSolutions": 2, # Only works in AlgoMethod heuristic "SolutionIndex": None, # Only works in AlgoMethod index "ApiMethod": "cpp", "RotatingBuffer": 0, }, "TuningParameters": { "SplitK": [0] }, "ProblemSizes": []} defaultCreateLogicOptions = {} # Currently unused * ``TestConfig`` 1. ``ColdIter``: The number of warm-up iterations before starting the kernel benchmark. 2. ``Iter``: The number of iterations in kernel benchmarking. 3. ``AlgoMethod``: It is recommended to keep this unchanged because the ``all`` method returns all the available solutions for the problem type. 4. ``ApiMethod``: The available options are ``c``, ``mix``, and ``cpp``. This setting doesn't affect the result much. 5. ``RotatingBuffer``: A size in MB. It is recommended to set this value equal to the size of the GPU cache so the kernel does not fetch data from the cache. * ``TuningParameters`` ``SplitK``: Divides ``K`` into ``N`` portions. Not every solution supports ``SplitK``; unsupported solutions are skipped. * ``CreateLogic`` Currently no control parameters. hipBLASLt backend assembly generator tuning ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ :doc:`hipBLASLt ` has a backend assembly generator in `hipBLASLt's GitHub repository `_, named TensileLite. TensileLite enables performance optimization by tuning the backend assembly generator. The following section explains how to use TensileLite to tune hipBLASLt for better performance. .. code-block:: shell cd /hipBLASLt/tensilelite ./Tensile/bin/Tensile config.yaml output_path config.yaml ''''''''''' This file contains the parameters and settings for the tuning process. Here’s a breakdown of the important sections: ``GlobalParameters`` The set of parameters that provides context for the entire tuning exercise.
Using ``0`` for ``NumElementsToValidate`` is suggested for performance tuning to avoid validation overhead. .. code-block:: python globalParameters["NumElementsToValidate"] = 0 ``BenchmarkProblems`` Defines the set of kernel specifications as well as the size definitions for the tuning exercise. * ``ProblemType`` (``OperationType``, ``DataType``, ``TransposeA``, ``TransposeB``) * ``BenchmarkCommonParameters`` (the same parameters for all solutions) * ``ForkParameters`` * ``BenchmarkFinalParameters`` (``ProblemSizes``) ``LibraryLogic`` Specifies the target environment and platform. * ``ScheduleName`` * ``aldebaran`` is MI200 * ``aquavanjaram`` is MI300 .. code-block:: shell $ ls aldebaran aquavanjaram navi31 navi32 .. code-block:: yaml LibraryLogic: ScheduleName: "aldebaran" DeviceNames: [Device 0050, Device 0052, Device 0054, Device 0062, Device 7400] ArchitectureName: "gfx90a" ``LibraryClient`` If defined, this will enable step 4 of the tuning process, which means the final library will be created. .. code-block:: shell $ ls aldebaran_Cijk_Ailk_Bjlk_S.yaml TensileLite tuning flow ------------------------ The TensileLite tuning flow consists of seven steps. In the first six steps, the programmable benchmarking protocol generates fast kernel candidates. In the final step (:ref:`step 7 `), these candidates are benchmarked against a predefined set of problem sizes. .. _tensilelite-tuning-flow-fig: .. figure:: ../../../data/how-to/tuning-guides/tensilelite-tuning-flow.png :align: center :alt: TensileLite tuning flow .. _tensilelite-tuning-step-1: Step 1: Initial solution parameters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Before Tensile is able to benchmark a kernel parameter in Step 2 of the :ref:`preceding figure `, such as ``PrefetchGlobalRead={False, True}``, all other kernel parameters not being measured must be specified. Therefore, the first step is to initialize a list of default kernel parameters, then subsequent steps of benchmarking will override a parameter from this default list, with the parameter determined from benchmarking. Tensile is pre-loaded with default parameters for any unspecified during tuning. Step 2: Benchmark common parameters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Benchmarking common parameters determines parameters which are universally preferable to their alternatives regardless of other parameters. To benchmark common parameters: * User specifies parameters and values to benchmark. * Tensile benchmarks all parameter combinations for a user-specified problem size. * Tensile selects the fastest parameter combination which is now labeled determined and will subsequently be used. In practice, these parameters are not used, since globally preferred parameters are set as defaults in Tensile and do not need to be re-measured. Step 3: Fork parameters ^^^^^^^^^^^^^^^^^^^^^^^ Rather than continuing to determine globally fastest parameters, which eventually leads to a single fastest kernel, forking creates many different kernels, all of which will be considered for use. All forked parameters are considered determined, i.e., they aren't measured to determine which is fastest. The :ref:`preceding figure ` shows 7 kernels being forked in Step 3. Step 4: Benchmark fork parameters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Next, tuning continues its refinement by determining fastest parameters for each forked permutation, same as in Step 2. 
Step 5: Join parameters ^^^^^^^^^^^^^^^^^^^^^^^ After tuning the forked kernels, joining reduces the list of kernels so that fewer kernels will be considered for final use. Each kernel in the resulting list must have different values for the listed ``JoinParameters``. For example, employing ``JoinParameters = MacroTile`` results in only a few final kernels, each with a different ``MacroTile``. If there are multiple kernels with the same ``MacroTile``, only the fastest is kept. In the preceding figure, the 7 forked kernels have been reduced to 3 joined kernels. Step 6: Benchmark join parameters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Users can further tune parameters of the joined kernels. This step is the same as Step 4, except that it tunes after joining so that there are fewer kernels to be tuned. In practice, this step is not used; using Step 4 is preferred so that all parameters are measured before joining. .. _tensilelite-tuning-step-7: Step 7: Benchmark final parameters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ At the conclusion of Step 6, all parameters of all kernels have been determined and the final set of kernels for consideration has been established. Now all final kernels will be measured against all problem sizes specified by the user. Problem sizes can be specified as Range sizes and Exact sizes. Range sizes cause benchmarking of a broad range of sizes, and Tensile will be able to interpolate which kernel is best even between the specifically measured sizes. Exact sizes cause a single problem size to be measured, and the final library is guaranteed to choose the fastest kernel for that size. This final benchmarking generates the data that is subsequently analyzed for creating the mapping of problem size to optimal kernel. Update logic YAML files ------------------------ The logic YAML files in hipBLASLt are located in ``library/src/amd_detail/rocblaslt/src/Tensile/Logic/asm_full/``. To merge the YAML files from the tuned results in TensileLite, use the ``merge.py`` script located in ``tensilelite/Tensile/Utilities`` with the following command: .. code-block:: shell merge.py original_dir new_tuned_yaml_dir output_dir The following table describes the logic YAML files.

+----------------+------------------------------------------------------+
| Logic YAML     | Description                                          |
+================+======================================================+
| ``Equality``   | Update the equality file when your tuned YAML is     |
|                | an exact tuning.                                     |
+----------------+------------------------------------------------------+
| ``GridBased``  | Update the gridbased file when your tuned YAML is    |
|                | a grid-based tuning.                                 |
+----------------+------------------------------------------------------+
| ``FreeSize``   | Update the freesize file when your tuned YAML        |
|                | contains confidential sizes, or others. Note that    |
|                | freesize YAML files do not require any problem size. |
+----------------+------------------------------------------------------+

Tensile optimization and performance tuning tips ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ MI16x16 versus MI32x32 MI16x16 outperforms MI32x32 due to its superior power efficiency. The MI16x16 format refers to the ``v_mfma`` instruction (such as ``v_mfma_f32_16x16x16f16``). See ``__. Clock differences among XCDs There can be a clock speed variation of 3% to 10% among different XCDs. Typically, XCD0 has the highest clock speed, while XCD7 has the lowest on MI300X. For optimal efficiency calculations on MI300X, use the XCD with the lowest average clock speed.
If the average clock speed of XCD0 is used, target efficiencies (such as 95% for DGEMM HPL cases with K=512) may not be achievable. ``WorkGroupMapping`` To maximize L2 cache efficiency, use multiples of the XCD number. For MI300X, this means using multiples of 8 (such as 24, 32, or 40). GEMM stride issues On MI300, if the matrix stride in GEMM is a multiple of 512 bytes, it can lead to Tagram channel hotspotting issues, which increase the latency of VMEM instructions and cause a significant performance drop, especially for TN transpose cases. To avoid this, use stride padding to ensure the stride is not a multiple of 512 bytes (for instance, for TN F16 GEMM, set ``lda = ldb = K + 128`` when ``K % 256 == 0``). .. _mi300x-ck: Optimizing Composable Kernel GEMM kernels ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The performance of a GEMM kernel is significantly influenced by the input values. The performance hierarchy based on input value types, from highest to lowest, is as follows: * Case 1: [all 0] * Case 2: [all identical integers] * Case 3: [random integers] * Case 4: [random floats] There can be more than a 20 percent performance drop between Case 1 and Case 4, and a 10 percent drop between random integers and random floats. Additionally, ``bf16`` matrix core execution is noticeably faster than ``f16``. Distributing workgroups with data sharing on the same XCD can enhance performance (reduce latency) and improve benchmarking stability. CK provides a rich set of template parameters for generating flexible accelerated computing kernels for different application scenarios. See :doc:`optimizing-with-composable-kernel` for an overview of Composable Kernel GEMM kernels, information on tunable parameters, and examples. .. _mi300x-miopen: MIOpen ------ MIOpen is AMD's open-source deep learning primitives library for GPUs. It implements fusion to optimize for memory bandwidth and GPU launch overheads, providing an auto-tuning infrastructure to overcome the large design space of problem configurations. Convolution ^^^^^^^^^^^ Many MIOpen kernels have parameters that affect their performance. Setting these kernel parameters to optimal values for a given convolution problem allows reaching the best possible throughput. The optimal values of these kernel parameters are saved in the PerfDb (performance database), which is populated through tuning. To manipulate the tuning level, use the environment variable ``MIOPEN_FIND_ENFORCE`` (1-6). Optimal values of kernel parameters are used to benchmark all applicable convolution kernels for the given convolution problem. These values reside in the FindDb. To control how the best performing kernel is found for a given convolution problem, use the environment variable ``MIOPEN_FIND_MODE`` (1-5). .. _mi300x-miopen-tuning: Tuning in MIOpen ^^^^^^^^^^^^^^^^ ``MIOPEN_FIND_ENFORCE=DB_UPDATE``, ``2`` Performs auto-tuning and updates the PerfDb. ``MIOPEN_FIND_ENFORCE=SEARCH``, ``3`` Only performs auto-tuning if the PerfDb does not contain optimized values for a given convolution problem. What does :doc:`PerfDb ` look like? ..
code-block:: [ 2x128x56xNHWCxF, [ ConvAsm1x1U : 1,8,2,64,2,4,1,8 ; // optimum kernel params for convolution problem 2x128x56xNHWCxF ConvOclDirectFwd1x1 : 1,128,1,1,0,2,32,4,0; // optimum kernel params for convolution problem 2x128x56xNHWCxF ], 2x992x516xNHWCxF, [ ConvAsm1x1U : 64,18,2,64,2,4,41,6 ; // optimum kernel params for convolution problem 2x992x516xNHWCxF ConvOclDirectFwd1x1 : 54,128,21,21,1,23,32,4,0 // optimum kernel params for convolution problem 2x992x516xNHWCxF ] ... ] See :doc:`miopen:conceptual/perfdb` for more information. Finding the fastest kernel ^^^^^^^^^^^^^^^^^^^^^^^^^^ ``MIOPEN_FIND_MODE=NORMAL``, ``1`` Benchmarks all the solvers and returns a list (the front element is the fastest kernel). ``MIOPEN_FIND_MODE=FAST``, ``2`` Checks the FindDb (Find database). If the convolution problem is found, the result is returned; otherwise, MIOpen falls back to immediate mode (which predicts the best-performing kernel parameters based on mathematical and AI models). ``MIOPEN_FIND_MODE=HYBRID``, ``3`` Checks the FindDb. If the convolution problem is found, the result is returned; otherwise, that problem is benchmarked. What does :doc:`FindDb ` look like? .. code-block:: [ 2x128x56xNHWCxF, [ ConvAsm1x1U : 0.045 (time), 12312 (workspace), algo_type; ConvOclDirectFwd1x1 : 1.145 (time), 0 (workspace), algo_type; ], 2x992x516xNHWCxF, [ ConvAsm1x1U : 2.045 (time), 12312 (workspace), algo_type; ConvOclDirectFwd1x1 : 1.145 (time), 0 (workspace), algo_type; ] ... ] See :doc:`miopen:how-to/find-and-immediate` for more information. For example: .. code-block:: shell MIOPEN_FIND_ENFORCE=3 MIOPEN_FIND_MODE=1 ./bin/MIOpenDriver convbfp16 -n 1 -c 1024 -H 14 -W 14 -k 256 -y 1 -x 1 -p 0 -q 0 -u 1 -v 1 -l 1 -j 1 -m conv -g 1 -F 1 .. _mi300x-rccl: RCCL ---- :doc:`RCCL ` is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. RCCL supports an arbitrary number of GPUs installed in a single node or multiple nodes and can be used in either single- or multi-process (such as MPI) applications. The following subtopics include information on RCCL features and optimization strategies: * :ref:`Use all eight GPUs ` * :ref:`Disable NUMA auto-balancing ` * :ref:`Disable ACS for multi-node RCCL ` * :ref:`Run RCCL-Unittests ` * :ref:`NPKit profiler ` * :ref:`RCCL-tests ` * :ref:`Use one-process-per-GPU mode ` * :ref:`RCCL in E2E workloads ` .. _mi300x-rccl-8-gpu: Use all eight GPUs ^^^^^^^^^^^^^^^^^^ In an :ref:`MI300X architecture `, there are dedicated links between each pair of GPUs in a fully connected topology. Therefore, for collective operations, the best performance is achieved when all 8 GPUs, and hence all the links between them, are used. In the case of 2- or 4-GPU collective operations (generally, fewer than 8 GPUs), you can only use a fraction of the potential bandwidth on the node. The following figure shows an :doc:`MI300X node-level architecture ` of a system with AMD EPYC processors in a dual-socket configuration and eight AMD Instinct MI300X GPUs. The MI300X OAMs attach to the host system via PCIe Gen 5 x16 links (yellow lines). The GPUs use seven high-bandwidth, low-latency AMD Infinity Fabric™ links (red lines) to form a fully connected 8-GPU system. .. _mi300x-node-level-arch-fig: .. figure:: ../../../data/shared/mi300-node-level-arch.png MI300 Series node-level architecture showing 8 fully interconnected MI300X OAM modules connected to (optional) PCIe switches via re-timers and HGX connectors. ..
_mi300x-rccl-disable-numa: Disable NUMA auto-balancing ^^^^^^^^^^^^^^^^^^^^^^^^^^^ To reduce performance variability and achieve better performance, make sure that NUMA auto-balancing is disabled on the node. Check whether NUMA auto-balancing is disabled by running the following command: ``cat /proc/sys/kernel/numa_balancing`` and checking whether the output is ``0``. If the output is ``1``, you can disable NUMA auto-balancing by running the following command: ``sudo sysctl kernel.numa_balancing=0``. For more details, see `AMD Instinct MI300X system optimization `_. .. _mi300x-rccl-disable-acs: Disable ACS for multi-node RCCL ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Check if ACS is disabled with ``sudo lspci -vvv | grep -i "acsctl"``. This will print many lines. Check if there are any that show ``SrcValid+``. If there are any ``SrcValid+``, then use the following ``disable_acs.sh`` script to disable ACS (requires ``sudo``). .. code-block:: shell

   #!/bin/bash
   #
   # Disable ACS on every device that supports it
   #
   PLATFORM=$(dmidecode --string system-product-name)
   logger "PLATFORM=${PLATFORM}"
   # Enforce platform check here.
   #case "${PLATFORM}" in
   #"OAM"*)
   #logger "INFO: Disabling ACS is no longer necessary for ${PLATFORM}"
   #exit 0
   #;;
   #*)
   #;;
   #esac
   # must be root to access extended PCI config space
   if [ "$EUID" -ne 0 ]; then
       echo "ERROR: $0 must be run as root"
       exit 1
   fi
   for BDF in `lspci -d "*:*:*" | awk '{print $1}'`; do
       # skip if it doesn't support ACS
       setpci -v -s ${BDF} ECAP_ACS+0x6.w > /dev/null 2>&1
       if [ $? -ne 0 ]; then
           #echo "${BDF} does not support ACS, skipping"
           continue
       fi
       logger "Disabling ACS on `lspci -s ${BDF}`"
       setpci -v -s ${BDF} ECAP_ACS+0x6.w=0000
       if [ $? -ne 0 ]; then
           logger "Error enabling directTrans ACS on ${BDF}"
           continue
       fi
       NEW_VAL=`setpci -v -s ${BDF} ECAP_ACS+0x6.w | awk '{print $NF}'`
       if [ "${NEW_VAL}" != "0000" ]; then
           logger "Failed to enable directTrans ACS on ${BDF}"
           continue
       fi
   done
   exit 0

.. _mi300x-rccl-unittests: Run RCCL-Unittests ^^^^^^^^^^^^^^^^^^ To verify the RCCL installation and test whether all parts and units of RCCL work as expected, you can run the RCCL-Unittests, which are explained in ``__. .. _mi300x-rccl-npkit: NPKit profiler ^^^^^^^^^^^^^^ To collect fine-grained trace events in RCCL components, especially in giant collective GPU kernels, you can use the NPKit profiler explained in ``__. .. _mi300x-rccl-tests: RCCL-tests ^^^^^^^^^^ RCCL-tests are performance and error-checking tests for RCCL maintained in ``__. These tests are one of the best ways to check the performance of different collectives provided by RCCL. You can select collectives, message sizes, datatypes, operations, number of iterations, etc., for your test, and then rccl-tests deliver performance metrics such as latency, algorithm bandwidth, and bus bandwidth for each case. .. _mi300x-rccl-one-process-per-gpu: Use one-process-per-GPU mode ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RCCL delivers the best performance for collectives when it is configured in a one-process-per-GPU mode. This is because, in a one-process-per-multiple-GPUs configuration, you can run into kernel launch latency issues: ROCm serializes kernel launches on multiple GPUs from one process, which hurts performance. .. _mi300x-rccl-e2e: RCCL in E2E workloads ^^^^^^^^^^^^^^^^^^^^^ Use the following environment variable to increase the number of channels used by RCCL when using RCCL in end-to-end workloads to potentially improve the performance: ..
code-block:: text export NCCL_MIN_NCHANNELS=112 .. _mi300x-triton-kernel-performance-optimization: Triton kernel performance optimization ====================================== Triton kernel optimization encompasses a variety of strategies aimed at maximizing the efficiency and performance of GPU computations. These strategies include :ref:`optimizing overall GPU resource utilization `, :ref:`tuning kernel configurations `, and :ref:`leveraging specific hardware features ` to achieve higher throughput and lower latency. .. _mi300x-autotunable-kernel-config: Auto-tunable kernel configurations ---------------------------------- Auto-tunable kernel configuration involves adjusting memory access and computational resources assigned to each compute unit. It encompasses the usage of :ref:`LDS `, register, and task scheduling on a compute unit. The GPU contains global memory, local data share (LDS), and registers. Global memory has high access latency, but is large. LDS access has much lower latency, but is smaller. It is a fast on-CU software-managed memory that can be used to efficiently share data between all work items in a block. Register access is the fastest yet smallest among the three. .. _mi300x-cu-fig: .. figure:: ../../../data/shared/compute-unit.png Schematic representation of a CU in the CDNA2 or CDNA3 architecture. The following is a list of kernel arguments used for tuning performance and resource allocation on AMD GPUs, which helps in optimizing the efficiency and throughput of various computational kernels. ``num_stages=n`` Adjusts the number of pipeline stages for different types of kernels. On AMD GPUs, set ``num_stages`` according to the following rules: * For kernels with a single GEMM, set to ``2``. * For kernels with two GEMMs fused (Flash Attention, or any other kernel that fuses 2 GEMMs), set to ``1``. * For kernels that fuse a single GEMM with another non-GEMM operator (for example ReLU activation), set to ``2``. * For kernels that have no GEMMs, set to ``1``. ``waves_per_eu=n`` Helps to manage Vector General Purpose Registers (VGPR) usage to achieve desired occupancy levels. This argument hints to the compiler to reduce VGPR to achieve ``n`` occupancy where ``n`` is a number. The goal is to achieve a certain occupancy level for each Execution Unit (EU, also called :ref:`SIMD Unit `) to achieve better latency or throughput. For more information on how to compute occupancy, see :ref:`mi300x-compute-kernel-occ`. This argument is useful if: * The occupancy of the kernel is limited by VGPR usage, and * The current VGPR usage is only a few above a boundary in :ref:`Occupancy related to VGPR usage in an Instinct MI300X GPU `. .. _mi300x-occupancy-vgpr-table: .. figure:: ../../../data/shared/occupancy-vgpr.png :alt: Occupancy related to VGPR usage in an Instinct MI300X GPU. :align: center Occupancy related to VGPRs usage on an Instinct MI300X GPU For example, according to the table, each Execution Unit (EU) has 512 available VGPRs, which are allocated in blocks of 16. If the current VGPR usage is 170, it will be rounded up to 176 due to the allocation granularity. In this case, the occupancy is limited to 2 waves per EU because :math:`176 \times 3 > 512`. So, if you set ``waves_per_eu`` to 3, the LLVM backend will attempt to reduce VGPR usage so that it might fit 3 waves per EU. ``BLOCK_M``, ``BLOCK_N``, ``BLOCK_K`` Tile sizes to be tuned to balance the memory-to-computation ratio. 
The goal is to minimize the memory transfer from global to shared and reuse memory across different threads. This needs to be tuned. The tile sizes should be large enough to maximize the efficiency of the memory-to-computation ratio but small enough to parallelize the greatest number of workgroups at the grid level. ``matrix_instr_nonkdim`` Experimental feature for Flash Attention-like kernels that determines the size of the Matrix Fused Multiply-Add (MFMA) instruction used. - ``matrix_instr_nonkdim = 16``: ``mfma_16x16`` is used. - ``matrix_instr_nonkdim = 32``: ``mfma_32x32`` is used. For GEMM kernels on an MI300X GPU, ``mfma_16x16`` typically outperforms ``mfma_32x32``, even for large tile/GEMM sizes. .. _mi300x-triton-gpu-utilization: Overall GPU resource utilization -------------------------------- As depicted in the following figure, each XCD in :doc:`MI300X ` contains 40 compute units (CUs), with 38 active. Each MI300X contains eight vertical XCDs, and a total of 304 active compute units capable of parallel computation. The first consideration is the number of CUs a kernel can distribute its task across. .. figure:: ../../../data/shared/xcd-sys-arch.png XCD-level system architecture showing 40 compute units, each with 32 KB L1 cache, a unified compute system with 4 ACE compute GPUs, shared 4MB of L2 cache, and a hardware scheduler (HWS). You can query hardware resources with the command ``rocminfo`` in the ``/opt/rocm/bin`` directory. For instance, query the number of CUs, number of SIMD, and wavefront size using the following commands. .. code-block:: shell rocminfo | grep "Compute Unit" rocminfo | grep "SIMD" rocminfo | grep "Wavefront Size" For the MI300X, the goal is to have a minimum of 1024 thread blocks or workgroups in the grid (kernel), with a preference for more. Identifying additional parallelism within the algorithm is necessary to enhance GPU utilization. For more information and examples, see `Accelerating A Triton Fused Kernel For W4a16 Quantized Inference With SplitK Work Decomposition `__. .. _mi300x-mlir-analysis: MLIR analysis ------------- Triton includes the following layouts: **blocked**, **shared**, **sliced**, and **MFMA**. Use the Triton GPU Intermediate Representation (IR) to identify the memory in which each computation takes place. Use the environment variable ``MLIR_ENABLE_DUMP`` to dump MLIR: .. code-block:: shell export MLIR_ENABLE_DUMP=1 The following is a snippet of IR from the Flash Attention decode ``int4`` KV program. It is to de-quantize the ``int4`` key-value from the ``int4`` data type to ``fp16``. .. 
code-block:: text %190 = tt.load %189 {cache = 1 : i32, evict = 1 : i32, isVolatile = false} : tensor<1x64xi32, #blocked6> loc(#loc159) %266 = arith.andi %190, %cst_28 : tensor<1x64xi32, #blocked6> loc(#loc250) %267 = arith.trunci %266 : tensor<1x64xi32, #blocked6> to tensor<1x64xi16, #blocked6> loc(#loc251) %268 = tt.bitcast %267 : tensor<1x64xi16, #blocked6> -> tensor<1x64xf16, #blocked6> loc(#loc252) %269 = triton_gpu.convert_layout %268 : (tensor<1x64xf16, #blocked6>) -> tensor<1x64xf16, #shared1> loc(#loc252) %270 = tt.trans %269 : (tensor<1x64xf16, #shared1>) -> tensor<64x1xf16, #shared2> loc(#loc194) %276 = triton_gpu.convert_layout %270 : (tensor<64x1xf16, #shared2>) -> tensor<64x1xf16, #blocked5> loc(#loc254) %293 = arith.mulf %276, %cst_30 : tensor<64x1xf16, #blocked5> loc(#loc254) %295 = arith.mulf %292, %294 : tensor<64x32xf16, #blocked5> loc(#loc264) %297 = arith.addf %295, %296 : tensor<64x32xf16, #blocked5> loc(#loc255) %298 = triton_gpu.convert_layout %297 : (tensor<64x32xf16, #blocked5>) -> tensor<64x32xf16, #shared1> loc(#loc255) %299 = tt.trans %298 : (tensor<64x32xf16, #shared1>) -> tensor<32x64xf16, #shared2> loc(#loc196) %300 = triton_gpu.convert_layout %299 : (tensor<32x64xf16, #shared2>) -> tensor<32x64xf16, #triton_gpu.dot_op<{opIdx = 1, parent = #mfma, kWidth = 4}>> loc(#loc197) From the IR snippet, you can see ``i32`` data is loaded from global memory to registers (``%190``). With a few element-wise operations in registers, it is stored in shared memory (``%269``) for the transpose operation (``%270``), which needs data movement across different threads. With the transpose done, it is loaded from LDS to register again (``%276``), and with a few more element-wise operations, it is stored to LDS again (``%298``). The last step loads from LDS to registers and converts to the dot-operand layout (``%300``). The IR snippet uses the LDS twice. The first is for the transpose, and the second is to convert a blocked layout to a dot operand layout. There’s an opportunity to optimize performance by using LDS once. .. _mi300x-assembly-analysis: ISA assembly analysis --------------------- To generate ISA, ``export AMDGCN_ENABLE_DUMP=1`` when running the Triton program. The generated ISA will be printed as standard output. You can dump it to a file for analysis. * Ensure ``global_load_dwordx4`` is used in the ISA, especially when the global memory load happens in the loop. * In most cases, the LDS load and store should use ``_b128`` to minimize the number of LDS access instructions. * The AMD ISA has ``s_waitcnt`` instruction to synchronize the dependency of memory access and computations. The ``s_waitcnt`` instructions can typically have two signals in the Triton context: * ``lgkmcnt(n)``: ``lgkm`` stands for LDS, GDS (Global Data Share), Constant, and Message. It is often related to LDS access. The ``n`` indicates the number of data accesses can still be ongoing before moving on to the next step. For example, if ``n`` is ``0``, wait for all ``lgkm`` access to finish before continuing. If ``n`` is ``1``, move on even if ``1`` ``lgkm`` access is still running asynchronously. * ``vmcnt(n)``: ``vm`` represents vector memory. This happens when vector memory is accessed, for example, when global load moves from global memory to vector memory. The variable ``n`` is the same as the previous setting. Generally recommended guidelines are as follows. * Vectorize memory access as much as possible. * Ensure synchronization is done efficiently. 
* Overlap of instructions to hide latency, but it requires thoughtful analysis of the algorithms. * If you find inefficiencies, you can trace it back to LLVM IR, TTGIR and even TTIR to see where the problem comes from. If you find it during compiler optimization, activate the MLIR dump (``export MLIR_ENABLE_DUMP=1``) and check which optimization pass caused the problem. .. _mi300x-hip-optimization: HIP performance optimization ============================ This section summarizes the best practices described in the :doc:`Performance guidelines ` section of the HIP documentation. Optimization areas of concern include: * Parallel execution * Memory usage optimization * Optimization for maximum throughput * Minimizing memory thrashing Parallel execution and GPU hardware utilization ----------------------------------------------- The application should reveal and efficiently imply as much parallelism as possible for optimal use to keep all system components active. Memory usage optimization ------------------------- To optimize memory throughput, minimize low-bandwidth data transfers, particularly between the host and device. Maximize on-chip memory, including shared memory and caches, to reduce data transfers between global memory and the device. In a GPU, global memory has high latency but a large size, while local data share (LDS) has lower latency but a smaller size, and registers have the fastest but smallest access. Aim to limit load/store operations in global memory. If multiple threads in a block need the same data, transfer it from global memory to LDS for efficient access. See :doc:`HIP's performance guidelines ` for greater detail. Diagnostic and performance analysis =================================== .. _mi300x-rocr-debug-agent: Debug memory access faults -------------------------- Identifying a faulting kernel is often enough to triage a memory access fault. The ROCr Debug Agent can trap a memory access fault and provide a dump of all active wavefronts that caused the error, as well as the name of the kernel. For more information, see :doc:`ROCr Debug Agent documentation `. To summarize, the key points include: 1. Compiling with ``-ggdb -O0`` is recommended but not required. 2. ``HSA_TOOLS_LIB=/opt/rocm/lib/librocm-debug-agent.so.2 HSA_ENABLE_DEBUG=1 ./my_program`` When the debug agent traps the fault, it produces verbose output of all wavefront registers and memory content. Importantly, it also prints something similar to the following: .. code-block:: text Disassembly for function vector_add_assert_trap(int*, int*, int*): code object: file:////rocm-debug-agent/build/test/rocm-debug-agent-test#offset=14309&size=31336 loaded at: [0x7fd4f100c000-0x7fd4f100e070] The kernel name and the code object file should be listed. In the example above, the kernel name is vector_add_assert_trap, but this might also look like: .. code-block:: text Disassembly for function memory:///path/to/codeobject#offset=1234&size=567: In this case, it's an in-memory kernel that was generated at runtime. Using the environment variable ``ROCM_DEBUG_AGENT_OPTIONS="--all --save-code-objects"`` will have the debug agent save all code objects to the current directory. Use ``--save-code-objects=[DIR]`` to save them in another location. The code objects will be renamed from the URI format with special characters replaced by ‘_’. Use ``llvm-objdump`` to disassemble the indicated in-memory code object that has been saved to disk. The name of the kernel is often found in the disassembled code object. .. 
code-block:: shell llvm-objdump --disassemble-all path/to/code-object.co Disabling memory caching strategies within the ROCm stack and PyTorch is recommended, where possible. This gives the debug agent the best chance of finding the memory fault where it originates. Otherwise, it could be masked by writing past the end of a cached block within a larger allocation. .. code-block:: text PYTORCH_NO_HIP_MEMORY_CACHING=1 HSA_DISABLE_FRAGMENT_ALLOCATOR=1 .. _mi300x-compute-kernel-occ: Compute the occupancy of a kernel --------------------------------- 1. Get the VGPR count: search for ``.vgpr_count`` in the ISA (denote it ``N``). 2. Get the allocated LDS by following these steps (denote it ``L`` for the kernel). a. ``export MLIR_ENABLE_DUMP=1`` b. ``rm -rf ~/.triton/cache`` c. ``python kernel.py | grep "triton_gpu.shared = " | tail -n 1`` d. You should see something like ``triton_gpu.shared = 65536``, indicating 65536 bytes of LDS are allocated for the kernel. 3. Get the number of waves per workgroup using the following steps (denote it ``nW``). a. ``export MLIR_ENABLE_DUMP=1`` b. ``rm -rf ~/.triton/cache`` c. ``python kernel.py | grep "triton_gpu.num-warps " | tail -n 1`` d. You should see something like ``"triton_gpu.num-warps" = 8``, indicating 8 waves per workgroup. 4. Compute the occupancy limited by VGPR based on ``N`` according to the :ref:`preceding table `. Denote the resulting waves per EU as ``occ_vgpr``. 5. Compute the occupancy limited by LDS based on ``L``: ``occ_lds = floor(65536 / L)``. 6. Then the occupancy is ``occ = min(floor(occ_vgpr * 4 / nW), occ_lds) * nW / 4`` a. ``occ_vgpr * 4`` gives the total number of waves on all 4 execution units (SIMDs) per CU. b. ``floor(occ_vgpr * 4 / nW)`` gives the occupancy of workgroups per CU regarding VGPR usage. c. The true ``occ`` is the minimum of the two. Find the full ``occ.sh`` at ``__. Special considerations ====================== Multi-GPU communications ------------------------ Because of the characteristics of MI300X inter-GPU communication and the limited bandwidth between and among 2 and 4 GPUs, avoid running workloads that use 2- or 4-GPU collectives. It's optimal to either use a single GPU (where no collective is required) or employ 8-GPU collectives. Multi-node FSDP and RCCL settings --------------------------------- When using PyTorch's FSDP (Fully Sharded Data Parallel) feature, the HIP streams used by RCCL and the HIP streams used for compute kernels do not always overlap well. As a workaround, it's recommended to use high-priority HIP streams with RCCL. To configure high-priority streams: - Set the environment variable ``TORCH_NCCL_HIGH_PRIORITY=1`` to force all RCCL streams to be high-priority. - Set the environment variable ``GPU_MAX_HW_QUEUES=2`` via the HIP runtime library. Hardware efficiency is maximized with 4 or fewer HIP streams. These environment variables limit the configuration to two compute streams and two RCCL streams, aligning with this best practice. Additionally, RCCL is often pre-optimized for MI300 systems in production by querying the node topology during startup, reducing the need for extensive manual tuning. Further reading =============== * :doc:`vllm-optimization` --- .. meta:: :description: How to install ROCm and popular deep learning frameworks. :keywords: ROCm, AI, LLM, train, fine-tune, FSDP, DeepSpeed, LLaMA, tutorial ..
_rocm-for-ai-install: ******************************************** Installing ROCm and deep learning frameworks ******************************************** Before getting started, install ROCm and supported deep learning frameworks. .. grid:: 1 .. grid-item-card:: Pre-install Each release of ROCm supports specific hardware and software configurations. Before installing, consult the :doc:`System requirements ` and :doc:`Installation prerequisites ` guides. If you’re new to ROCm, refer to the :doc:`ROCm quick start install guide for Linux `. If you’re using a Radeon GPU for graphics-accelerated applications, refer to the `Radeon installation instructions `_. You can install ROCm on :doc:`compatible systems ` via your Linux distribution's package manager. See the following documentation resources to get started: * :doc:`ROCm installation overview ` * :doc:`Using your Linux distribution's package manager ` * :ref:`Multi-version installation ` .. grid:: 1 .. grid-item-card:: Post-install Follow the :doc:`post-installation instructions ` to configure your system linker, PATH, and verify the installation. If you encounter any issues during installation, refer to the :doc:`Installation troubleshooting ` guide. Deep learning frameworks ======================== ROCm supports deep learning frameworks and libraries including `PyTorch `_, `TensorFlow `_, `JAX `_, and more. Review the :doc:`framework installation documentation <../deep-learning-rocm>`. For ease-of-use, it's recommended to use official ROCm prebuilt Docker images with the framework pre-installed. Next steps ========== After installing ROCm and your desired ML libraries -- and before running AI workloads -- conduct system health benchmarks to test the optimal performance of your AMD hardware. See :doc:`system-setup/index` to get started. --- .. meta:: :description: System setup and validation steps for AI training and inference on ROCm :keywords: AMD Instinct, ROCm, GPU, AI, training, inference, benchmarking, performance, validation ************************************* System setup for AI workloads on ROCm ************************************* Before you begin training or inference on AMD Instinct™ GPUs, complete the following system setup and validation steps to ensure optimal performance. Prerequisite system validation ============================== First, confirm that your system meets all software and hardware prerequisites. See :doc:`prerequisite-system-validation`. Docker images for AMD Instinct GPUs =================================== AMD provides prebuilt Docker images for AMD Instinct™ MI300X and MI325X GPUs. These images include ROCm-enabled deep learning frameworks and essential software components. They support single-node and multi-node configurations and are ready for training and inference workloads out of the box. Multi-node training ------------------- For instructions on enabling multi-node training, see :doc:`multi-node-setup`. System optimization and validation ================================== Before running workloads, verify that the system is configured correctly and operating at peak efficiency. Recommended steps include: - Disabling NUMA auto-balancing - Running system benchmarks to validate hardware performance For details on running system health checks, see :doc:`system-health-check`. --- .. meta:: :description: Multi-node setup for AI training :keywords: gpu, system, health, validation, bench, perf, performance, rvs, rccl, babel, mi300x, mi325x, flops, bandwidth, rbt, training .. 
_rocm-for-ai-multi-node-setup: ********************************* Multi-node setup for AI workloads ********************************* AMD provides ready-to-use Docker images for AMD Instinct™ MI300X and MI325X GPUs containing ROCm-capable deep learning frameworks and essential software components. These Docker images can run and leverage multiple nodes if they are available. This page describes how to enable the multi-node training of AI workloads on AMD Instinct GPUs. Prerequisites ============= Before starting, ensure your environment meets the following requirements: * Multi-node networking: your cluster should have a configured multi-node network. For setup instructions, see the `Multi-node network configuration for AMD Instinct GPUs `__ guide in the Instinct documentation. * ROCm Docker container to simplify environment setup for AI workloads. See the following resources to get started: * :doc:`Training a model with Megatron-LM and ROCm <../training/benchmark-docker/megatron-lm>` * :doc:`Training a model with PyTorch and ROCm <../training/benchmark-docker/pytorch-training>` * :doc:`Training a model with JAX MaxText and ROCm <../training/benchmark-docker/jax-maxtext>` * Slurm workload manager to run the :ref:`provided examples `. Install required packages ========================= To run multi-node workloads, ensure you have all the required packages installed based on your network device. For example, on Ubuntu systems: .. code-block:: shell apt install -y iproute2 apt install -y linux-headers-"$(uname -r)" libelf-dev apt install -y gcc make libtool autoconf librdmacm-dev rdmacm-utils infiniband-diags ibverbs-utils perftest ethtool libibverbs-dev rdma-core strace libibmad5 libibnetdisc5 ibverbs-providers libibumad-dev libibumad3 libibverbs1 libnl-3-dev libnl-route-3-dev Compile and install the RoCE library ------------------------------------ If you're using Broadcom NICs, you need to compile and install the RoCE (RDMA over Converged Ethernet) library. See `RoCE cluster network configuration guide for AMD Instinct GPUs `__ for more information. See the `Ethernet networking guide for AMD Instinct MI300X GPU clusters: Compiling Broadcom NIC software from source `_ for more details. .. important:: It is crucial to install the exact same version of the RoCE library that is installed on your host system. Also, ensure that the path to these libraries on the host is correctly mounted into your Docker container. Failure to do so can lead to compatibility issues and communication failures. 1. Set ``BUILD_DIR`` to the path on the host system where the Broadcom drivers and ``bnxt_rocelib`` source are located. Then, navigate to the ``bnxt_rocelib`` directory. .. code-block:: shell export BUILD_DIR=/path/to/your/broadcom_drivers_on_host cd $BUILD_DIR/drivers_linux/bnxt_rocelib/ 2. The ``bnxt_rocelib`` directory contains a version of ``libbnxt_re`` in a zipped ``.tar.gz`` file. .. code-block:: shell tar -xf libbnxt_re-a.b.c.d.tar.gz cd libbnxt_re-a.b.c.d 3. Compile and install the RoCE library. .. code-block:: shell sh autogen.sh ./configure make find /usr/lib64/ /usr/lib -name "libbnxt_re-rdmav*.so" -exec mv {} {}.inbox \; make install all sh -c "echo /usr/local/lib >> /etc/ld.so.conf" ldconfig cp -f bnxt_re.driver /etc/libibverbs.d/ find . -name "*.so" -exec md5sum {} \; BUILT_MD5SUM=$(find . 
-name "libbnxt_re-rdmav*.so" -exec md5sum {} \; | cut -d " " -f 1) Environment setup ================= Before running multi-node workloads, set these essential environment variables: Master address -------------- By default, ``localhost`` is used for single-node configurations. Change ``localhost`` to the master node's resolvable hostname or IP address: .. code-block:: bash export MASTER_ADDR="${MASTER_ADDR:-localhost}" Number of nodes --------------- Set the number of nodes you want to train on (for example, ``2``, ``4``, or ``8``): .. code-block:: bash export NNODES="${NNODES:-}" Node ranks ---------- Set the rank of each node (``0`` for master, ``1`` for the first worker node, and so on). Node ranks should be unique across all nodes in the cluster. .. code-block:: bash export NODE_RANK="${NODE_RANK:-}" Network interface ----------------- Update the network interface in the script to match your system's network interface. To find your network interface, run the following (outside of any Docker container): .. code-block:: bash ip a Look for an active interface (status "UP") with an IP address in the same subnet as your other nodes. Then, update the following variable in the script, for example: .. code-block:: bash export NCCL_SOCKET_IFNAME=ens50f0np0 This variable specifies which network interface to use for inter-node communication. Setting this variable to the incorrect interface can result in communication failures or significantly reduced performance. .. tip:: This command sets ``NCCL_SOCKET_IFNAME``'s value to the last RDMA interface. .. code-block:: bash export NCCL_SOCKET_IFNAME=$(rdma link show | awk '{print $NF}' | sort | tail -n1) RDMA/IB interface ----------------- Set the RDMA interfaces to be used for communication. NICs can come from different vendors and the names of the RDMA interface can be different. To get the list of all the RDMA/IB devices, run: .. code-block:: bash ibv_devices The command below gets the list of all RDMA/IB devices and puts them in a comma-separated format. If (``rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7``) are your RDMA interfaces, then set: .. code-block:: bash # If using Broadcom NIC export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 # If using Mellanox NIC # export NCCL_IB_HCA=mlx5_0,mlx5_1,mlx5_2,mlx5_3,mlx5_4,mlx5_5,mlx5_8,mlx5_9 .. tip:: Alternatively, if you want to choose the RDMA interface automatically, you can use the following. This command will sort the RDMA interfaces and then select the first eight RDMA interfaces. .. code-block:: bash export NCCL_IB_HCA=$(ibv_devices | awk 'NR>2 {print $1}' | sort | head -n 8 | paste -sd,) Global ID index --------------- Update the global ID index if you're using RoCE. .. code-block:: bash export NCCL_IB_GID_INDEX=3 .. _multi-node-setup-training-examples: Multi-node training examples ============================ The following examples use the Slurm workload manager to launch jobs on multiple nodes. To run these scripts as-is, you must have a Slurm environment configured. The scripts are designed to work with both Broadcom Thor 2 and Mellanox NICs by automatically installing the required libraries and setting the necessary environment variables. For systems with Broadcom NICs, the scripts assume the host's RoCE library is located in the ``/opt`` directory. The following benchmarking examples demonstrate the training of a Llama 3 8B model across multiple 8-GPU nodes, using FSDP for intra-node parallelism and DP for inter-node parallelism. .. 
_rocm-for-ai-multi-node-setup-jax-train-example: JAX MaxText ----------- 1. Download the desired multi-node benchmarking script from ``__. .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/MAD/refs/heads/develop/scripts/jax-maxtext/gpu-rocm/llama3_8b_multinode.sh Or clone the ``__ repository. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/jax-maxtext/gpu-rocm 2. Run the benchmark for multi-node training. .. code-block:: shell sbatch -N llama3_8b_multinode.sh .. _rocm-for-ai-multi-node-setup-pyt-train-example: PyTorch training ---------------- .. note:: The ROCm PyTorch Training Docker image now focuses on :doc:`Training a model with Primus and PyTorch <../training/benchmark-docker/primus-pytorch>`. The following example refers to the legacy workflow :ref:`Training a model with PyTorch `. 1. Download the ``run_multinode_train.sh`` benchmarking script from ``__. .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/MAD/refs/heads/develop/scripts/pytorch_train/run_multinode_train.sh Or clone the ``__ repository. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train 2. Run the benchmark for multi-node training. .. code-block:: shell sbatch -N run_multinode_train.sh .. seealso:: See :ref:`Training a model with PyTorch ` for more examples and information. Megatron-LM ----------- .. note:: The Megatron-LM Docker image now focuses on :ref:`Training a model with Primus and Megatron `. The following example refers to the legacy Megatron-LM workflow, :ref:`Training a model with Megatron-LM `, and might have limited support. 1. Download the ``train_llama_slurm.sh`` benchmarking script from ``__. 2. Set the network interface parameters according to the guidelines above and run the script. .. code-block:: shell cd
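# Run these commands from the Megatron-LM directory (adjust the cd path above for your setup).
# The exports below pass the network interface, RDMA devices, container image, and data
# cache path to the SLURM launch script.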
export NETWORK_INTERFACE=$NCCL_SOCKET_IFNAME export NCCL_IB_HCA=$NCCL_IB_HCA export IMAGE=docker.io/rocm/megatron-lm:latest # or your preferred image export DATA_CACHE_PATH=/nfs/mounted/repo sbatch -N examples/llama/train_llama_slurm.sh 3. For example, to run a Llama 3 8B workload in BF16 precision, use the following command. .. code-block:: shell MODEL_NAME=llama3 sbatch -N 8 examples/llama/train_llama_slurm.sh 8 2 128 8192 0 0 # Other parameters, such as TP, FP8 datatype, can be adjusted in the script. Further reading =============== * `Multi-node network configuration for AMD Instinct GPUs `__ * `Ethernet networking guide for AMD Instinct MI300X GPU clusters: Compiling Broadcom NIC software from source `__ --- .. meta:: :description: Prerequisite system validation before using ROCm for AI. :keywords: ROCm, AI, LLM, train, megatron, Llama, tutorial, docker, torch, pytorch, jax .. _train-a-model-system-validation: .. _rocm-for-ai-system-optimization: ********************************************************** Prerequisite system validation before running AI workloads ********************************************************** Complete the following system validation and optimization steps to set up your system before starting training and inference. Disable NUMA auto-balancing --------------------------- Generally, application performance can benefit from disabling NUMA auto-balancing. However, it might be detrimental to performance with certain types of workloads. Run the command ``cat /proc/sys/kernel/numa_balancing`` to check your current NUMA (Non-Uniform Memory Access) setting. An output of ``0`` indicates this setting is disabled. If there is no output or the output is ``1``, run the following command to disable NUMA auto-balancing. .. code-block:: shell sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' See `Disable NUMA auto-balancing `_ in the Instinct documentation for more information. Hardware verification with ROCm ------------------------------- Use the command ``rocm-smi --setperfdeterminism 1900`` to set the max clock speed up to 1900 MHz instead of the default 2100 MHz. This can reduce the chance of a PCC event lowering the attainable GPU clocks. This setting will not be required for new IFWI releases with the production PRC feature. You can restore this setting to its default value with the ``rocm-smi -r`` command. Run the command: .. code-block:: shell rocm-smi --setperfdeterminism 1900 See `Hardware verification for ROCm `_ in the Instinct documentation for more information. RCCL Bandwidth Test for multi-node setups ----------------------------------------- The ROCm Communication Collectives Library (RCCL) is a standalone library of standard collective communication routines for GPUs. See the :doc:`RCCL documentation ` for more information. Before starting pretraining, running an RCCL bandwidth test helps ensure that the multi-GPU or multi-node setup is optimized for efficient distributed training. Running the RCCL bandwidth test helps verify that: - The GPUs can communicate across nodes or within a single node. - The interconnect (such as InfiniBand, Ethernet, or Infinity Fabric) is functioning as expected and provides adequate bandwidth for communication. - There are no hardware setup or cabling issues that could affect communication between the GPUs. Tuning and optimizing hyperparameters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In distributed training, specific hyperparameters related to distributed communication can be tuned based on the results of the RCCL bandwidth test.
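For example, one knob you might adjust based on the measured bandwidth is the number of communication channels RCCL uses. The following is an illustrative sketch only -- the value ``40`` mirrors the ``NCCL_MIN_NCHANNELS`` setting used in the multi-node ``mpirun`` example later in this section, and the best value depends on your cluster.

.. code-block:: shell

   # Illustrative example: raise the minimum RCCL channel count if the
   # bandwidth test shows inter-node links are underutilized.
   export NCCL_MIN_NCHANNELS=40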
These variables are already set in the Docker image: .. code-block:: shell # force all RCCL streams to be high priority export TORCH_NCCL_HIGH_PRIORITY=1 # specify which RDMA interfaces to use for communication export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 # define the Global ID index used in RoCE mode export NCCL_IB_GID_INDEX=3 # avoid data corruption/mismatch issue that existed in past releases export RCCL_MSCCL_ENABLE=0 Running the RCCL Bandwidth Test ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ It's recommended you run the RCCL bandwidth test before launching training. It ensures system performance is sufficient to launch training. RCCL is not included in the AMD Megatron-LM Docker image; follow the instructions in ``__ to get started. See :ref:`mi300x-rccl` for more information. Run on 8 GPUs (``-g 8``), scanning from 8 bytes to 10 GB: .. code-block:: shell ./build/all_reduce_perf -b 8 -e 10G -f 2 -g 8 .. image:: ../../../data/how-to/rocm-for-ai/rccl-tests-8-gpu.png :width: 800 Using one MPI process per GPU and ``-g 1`` for performance-oriented runs on both single-node and multi-node is recommended. So, a run on 8 GPUs looks something like: .. code-block:: shell mpirun -np 8 --bind-to numa ./build/all_reduce_perf -b 8 -e 10G -f 2 -g 1 .. image:: ../../../data/how-to/rocm-for-ai/rccl-tests-1-mpi-process-per-gpu.png :width: 800 Running with one MPI process per GPU ensures a one-to-one mapping for CPUs and GPUs, which can be beneficial for smaller message sizes. This better represents the real-world use of RCCL in deep learning frameworks like PyTorch and TensorFlow. Use the following script to run the RCCL test for four MI300X GPU nodes. Modify paths and node addresses as needed. .. code-block:: /home/$USER/ompi_for_gpu/ompi/bin/mpirun -np 32 -H tw022:8,tw024:8,tw010:8, tw015:8 \ --mca pml ucx \ --mca btl ^openib \ -x NCCL_SOCKET_IFNAME=ens50f0np0 \ -x NCCL_IB_HCA=rdma0:1,rdma1:1,rdma2:1,rdma3:1,rdma4:1,rdma5:1,rdma6:1,rdma7:1 \ -x NCCL_IB_GID_INDEX=3 \ -x NCCL_MIN_NCHANNELS=40 \ -x NCCL_DEBUG=version \ $HOME/rccl-tests/build/all_reduce_perf -b 8 -e 8g -f 2 -g 1 .. image:: ../../../data/how-to/rocm-for-ai/rccl-tests-4-mi300x-gpu-nodes.png :width: 800 --- :orphan: .. meta:: :description: System health checks with RVS, RCCL tests, BabelStream, and TransferBench to validate AMD hardware performance running AI workloads. :keywords: gpu, accelerator, system, health, validation, bench, perf, performance, rvs, rccl, babel, mi300x, mi325x, flops, bandwidth, rbt, training, inference .. _rocm-for-ai-system-health-bench: ***************************************** System health benchmarks for AI workloads ***************************************** Before running AI workloads, it is important to validate that your AMD hardware is configured correctly and is performing optimally. This topic outlines several system health benchmarks you can use to test key aspects like GPU compute capabilities (FLOPS), memory bandwidth, and interconnect performance. Many of these tests are part of the ROCm Validation Suite (RVS). ROCm Validation Suite (RVS) tests ================================= RVS provides a collection of tests, benchmarks, and qualification tools, each targeting a specific subsystem of the system under test. It includes tests for GPU stress and memory bandwidth. .. _healthcheck-install-rvs: Install ROCm Validation Suite ----------------------------- To get started, install RVS. For example, on an Ubuntu system with ROCm already installed, run the following command: .. 
code-block:: shell sudo apt update sudo apt install rocm-validation-suite See the `ROCm Validation Suite installation instructions `_, and `System validation tests `_ in the Instinct documentation for more detailed instructions. Benchmark, stress, and qualification tests ------------------------------------------ The GPU stress test runs various GEMM computations as workloads to stress the GPU FLOPS performance and check whether it meets the configured target GFLOPS. Run the benchmark, stress, and qualification tests included with RVS. See the `Benchmark, stress, qualification `_ section of the Instinct documentation for usage instructions. BabelStream test ---------------- BabelStream is a synthetic GPU benchmark based on the STREAM benchmark for CPUs, measuring memory transfer rates to and from global device memory. BabelStream tests are included with the RVS package as part of the `BABEL module `_. For more information, see `Performance benchmarking `_ in the Instinct documentation. RCCL tests ========== The ROCm Communication Collectives Library (RCCL) enables efficient multi-GPU communication. The ``__ suite benchmarks the performance and verifies the correctness of these collective operations. This helps ensure optimal scaling for multi-GPU tasks. 1. To get started, build RCCL-tests using the official instructions in the README at ``__ or use the following commands: .. code-block:: shell git clone https://github.com/ROCm/rccl-tests.git cd rccl-tests make 2. Run the suggested RCCL tests -- see `RCCL benchmarking `_ in the AMD Instinct customer acceptance guide. TransferBench test ================== TransferBench is a standalone utility for benchmarking simultaneous data transfer performance between various devices in the system, including CPU-to-GPU and GPU-to-GPU (peer-to-peer). This helps identify potential bottlenecks in data movement between the host system and the GPUs, or between GPUs, which can impact end-to-end latency. .. _healthcheck-install-transferbench: 1. To get started, use the instructions in the `TransferBench documentation `__ or use the following commands: .. code:: shell git clone https://github.com/ROCm/TransferBench.git cd TransferBench CC=hipcc make 2. Run the suggested TransferBench tests -- see `TransferBench benchmarking `__ in the Instinct performance benchmarking documentation for instructions. --- .. meta:: :description: How to train a model using JAX MaxText for ROCm. :keywords: ROCm, AI, LLM, train, jax, torch, Llama, flux, tutorial, docker ****************************************** Training a model with JAX MaxText on ROCm ****************************************** The MaxText for ROCm training Docker image provides a prebuilt environment for training on AMD Instinct MI355X, MI350X, MI325X, and MI300X GPUs, including essential components like JAX, XLA, ROCm libraries, and MaxText utilities. It includes the following software components: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/jax-maxtext-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for docker in dockers %} {% set jax_version = docker.components["JAX"] %} .. tab-item:: ``{{ docker.pull_tag }}`` :sync: {{ docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endfor %} .. note:: The ``rocm/jax-training:maxtext-v25.9`` has been updated to ``rocm/jax-training:maxtext-v25.9.1``. 
This revision includes a fix for segmentation fault issues during launch. See the :doc:`versioned documentation `. MaxText on ROCm provides the following key features to train large language models efficiently: - Transformer Engine (TE) - Flash Attention (FA) 3 -- with or without sequence input packing - GEMM tuning - Multi-node support - NANOO FP8 (for MI300X series GPUs) and FP8 (for MI355X and MI350X) quantization support .. _amd-maxtext-model-support-v25.11: Supported models ================ The following models are pre-optimized for performance on AMD Instinct GPUs. Some instructions, commands, and available training configurations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/jax-maxtext-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
Model
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Variant
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
.. note:: Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Environment setup ================= This Docker image is optimized for specific model configurations outlined as follows. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. Pull the Docker image --------------------- Use the following command to pull the Docker image from Docker Hub. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/jax-maxtext-benchmark-models.yaml {% set docker = data.dockers[0] %} .. code-block:: shell docker pull {{ docker.pull_tag }} .. _amd-maxtext-multi-node-setup-v25.11: Multi-node configuration ------------------------ See :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. .. _amd-maxtext-get-started-v25.11: Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/jax-maxtext-benchmark-models.yaml .. _vllm-benchmark-mad: {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: {% if model.mad_tag and "single-node" in model.doc_options %} .. tab-item:: MAD-integrated benchmarking The following run command is tailored to {{ model.model }}. See :ref:`amd-maxtext-model-support-v25.11` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. Use this command to run the performance benchmark test on the {{ model.model }} model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/perf.csv/``. {% endif %} .. tab-item:: Standalone benchmarking The following commands are optimized for {{ model.model }}. See :ref:`amd-maxtext-model-support-v25.11` to switch to another available model. Some instructions and resources might not be available for all models and configurations. .. rubric:: Download the Docker image and required scripts Run the JAX MaxText benchmark tool independently by starting the Docker container as shown in the following snippet. .. 
code-block:: shell docker pull {{ docker.pull_tag }} {% if model.model_repo and "single-node" in model.doc_options %} .. rubric:: Single node training 1. Set up environment variables. .. code-block:: shell export MAD_SECRETS_HFTOKEN= export HF_HOME= ``MAD_SECRETS_HFTOKEN`` is your Hugging Face access token to access models, tokenizers, and data. See `User access tokens `__. ``HF_HOME`` is where ``huggingface_hub`` will store local data. See `huggingface_hub CLI `__. If you already have downloaded or cached Hugging Face artifacts, set this variable to that path. Downloaded files typically get cached to ``~/.cache/huggingface``. 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device=/dev/dri \ --device=/dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ -v $HF_HOME:/hf_cache \ -e HF_HOME=/hf_cache \ -e MAD_SECRETS_HFTOKEN=$MAD_SECRETS_HFTOKEN --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} 3. In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``MAD/scripts/jax-maxtext``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/jax-maxtext 4. Run the setup scripts to install libraries and datasets needed for benchmarking. .. code-block:: shell ./jax-maxtext_benchmark_setup.sh -m {{ model.model_repo }} 5. To run the training benchmark without quantization, use the following command: .. code-block:: shell ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} For quantized training, run the script with the appropriate option for your Instinct GPU. .. tab-set:: .. tab-item:: MI355X and MI350X For ``fp8`` quantized training on MI355X and MI350X GPUs, use the following command: .. code-block:: shell ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} -q fp8 {% if model.model_repo not in ["Llama-3.1-70B", "Llama-3.3-70B"] %} .. tab-item:: MI325X and MI300X For ``nanoo_fp8`` quantized training on MI300X series GPUs, use the following command: .. code-block:: shell ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} -q nanoo_fp8 {% endif %} {% endif %} {% if model.multinode_training_script and "multi-node" in model.doc_options %} .. rubric:: Multi-node training The following examples use SLURM to run on multiple nodes. .. note:: The following scripts will launch the Docker container and run the benchmark. Run them outside of any Docker container. 1. Make sure ``$HF_HOME`` is set before running the test. See `ROCm benchmarking `__ for more details on downloading the Llama models before running the benchmark. 2. To run multi-node training for {{ model.model }}, use the `multi-node training script `__ under the ``scripts/jax-maxtext/gpu-rocm/`` directory. 3. Run the multi-node training benchmark script. .. code-block:: shell sbatch -N {{ model.multinode_training_script }} .. rubric:: Profiling with rocprofv3 If you need to collect a trace and the JAX profiler isn't working, use ``rocprofv3`` provided by the :doc:`ROCprofiler-SDK ` as a workaround. For example: .. code-block:: bash rocprofv3 \ --hip-trace \ --kernel-trace \ --memory-copy-trace \ --rccl-trace \ --output-format pftrace \ -d ./v3_traces \ # output directory -- ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} # or desired command You can set the directory where you want the .json traces to be saved using ``-d ``. The resulting traces can be opened in Perfetto: ``__. 
{% else %} .. rubric:: Multi-node training For multi-node training examples, choose a model from :ref:`amd-maxtext-model-support-v25.11` with an available `multi-node training script `__. {% endif %} {% endfor %} {% endfor %} Known issues ============ - Minor performance regression (< 4%) for BF16 quantization in Llama models and Mixtral 8x7b. - You might see minor loss spikes, or loss curve may have slightly higher convergence end values compared to the previous ``jax-training`` image. - For FP8 training on MI355, many models will display a warning message like: ``Warning: Latency not found for MI_M=16, MI_N=16, MI_K=128, mi_input_type=BFloat8Float8_fnuz. Returning latency value of 32 (really slow).`` The compile step may take longer than usual, but training will run. This will be fixed in a future release. - The built-in JAX profiler isn't working. If you need to collect a trace and the JAX profiler isn't working, use ``rocprofv3`` provided by the :doc:`ROCprofiler-SDK ` as a workaround. For example: .. code-block:: bash rocprofv3 \ --hip-trace \ --kernel-trace \ --memory-copy-trace \ --rccl-trace \ --output-format pftrace \ -d ./v3_traces \ # output directory -- ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} # or desired command You can set the directory where you want the .json traces to be saved using ``-d ``. The resulting traces can be opened in Perfetto: ``__. Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`previous-versions/jax-maxtext-history` to find documentation for previous releases of the ``ROCm/jax-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM on ROCm ****************************************** .. caution:: For a unified training solution on AMD GPUs with ROCm, the `rocm/megatron-lm `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including Megatron-LM and :doc:`torchtitan `. Primus with Megatron is designed to replace this ROCm Megatron-LM training workflow. To learn how to migrate workloads from Megatron-LM to Primus with Megatron, see :doc:`previous-versions/megatron-lm-primus-migration-guide`. The `Megatron-LM framework for ROCm `_ is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama, DeepSeek, and Mixtral, enabling developers to train next-generation AI models more efficiently. AMD provides ready-to-use Docker images for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. It contains the following software components to accelerate training workloads: .. 
datatemplate:yaml:: /data/how-to/rocm-for-ai/training/megatron-lm-benchmark-models.yaml .. tab-set:: .. tab-item:: {{ data.docker.pull_tag }} :sync: {{ data.docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in data.docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-megatron-lm-model-support-v25.11: Supported models ================ The following models are supported for training performance benchmarking with Megatron-LM and ROCm on AMD Instinct MI300X Series GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. {% set model_groups = data.model_groups %} .. raw:: html
Model
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Variant
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). .. _amd-megatron-lm-performance-measurements-v25.11: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and latency measurements for training popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the latest version of this training benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-megatron-lm-training-v25.11: Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v25.11: Download the Docker image ------------------------- .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/megatron-lm-benchmark-models.yaml {% set docker = data.docker %} 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash 4. **Megatron-LM backward compatibility setup** -- this Docker is primarily intended for use with Primus, but it maintains Megatron-LM compatibility with limited support. To roll back to using Megatron-LM, follow these steps: .. code-block:: shell cd /workspace/Megatron-LM/ pip uninstall megatron-core pip install -e . The Docker container hosts a verified commit of ``__. .. _amd-megatron-lm-environment-setup-v25.11: Configuration ============= .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b Update the ``train_llama3.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b Update the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. 
Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy Update the ``train_deepseekv3.sh`` configuration script in the ``examples/deepseek_v3`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b Update the ``train_deepseekv2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Update the ``train_mixtral_moe.sh`` configuration script in the ``examples/mixtral`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. note:: See :ref:`Key options ` for more information on configuration options. Multi-node configuration ------------------------ Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. See :ref:`amd-megatron-lm-multi-node-examples` for example run commands. .. _amd-megatron-lm-tokenizer-v25.11: Tokenizer --------- You can assign the path of an existing tokenizer to the ``TOKENIZER_MODEL`` as shown in the following examples. If the tokenizer is not found, it'll be downloaded if publicly available. .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b If you do not have Llama 3.3 tokenizer locally, you need to use your personal Hugging Face access token ``HF_TOKEN`` to download the tokenizer. See `Llama-3.3-70B-Instruct `_. After you are authorized, use your ``HF_TOKEN`` to download the tokenizer and set the variable ``TOKENIZER_MODEL`` to the tokenizer path. .. code-block:: shell export HF_TOKEN= The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.3-70B-Instruct" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-8B" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-70B" .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b The training script uses either the ``Llama2Tokenizer`` or ``HuggingFaceTokenizer`` by default. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V3" .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V2-Lite" .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Download the Mixtral tokenizer. .. 
code-block:: shell mkdir tokenizer cd tokenizer export HF_TOKEN= wget --header="Authorization: Bearer $HF_TOKEN" -O ./tokenizer.model https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.model Use the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL=tokenizer/tokenizer.model .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-7B" .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-72B" Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH="/data/bookcorpus_text_sentence" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Download the dataset ^^^^^^^^^^^^^^^^^^^^ .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b pyt_megatron_lm_train_llama-3.1-70b-proxy For Llama models, use the `prepare_dataset.sh `_ script to prepare your dataset. To download the dataset, set the ``DATASET`` variable to the dataset you'd like to use. Three datasets are supported: ``DATASET=wiki``, ``DATASET=fineweb``, and ``DATASET=bookcorpus``. .. code-block:: shell DATASET=wiki TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for wiki-en dataset DATASET=bookcorpus TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for bookcorpus dataset ``TOKENIZER_MODEL`` can be any accessible Hugging Face tokenizer. Remember to either pre-download the tokenizer or setup Hugging Face access otherwise when needed -- see the :ref:`Tokenizer ` section. .. note:: When training set ``DATA_PATH`` to the specific file name prefix pointing to the ``.bin`` or ``.idx`` as in the following example: .. code-block:: shell DATA_PATH="data/bookcorpus_text_sentence" # Change to where your dataset is stored. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. 
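# Convert the downloaded JSON into Megatron-style .bin/.idx files before training.
# The arguments appear to be: input JSON, tokenizer class, JSON text field,
# output directory, and Hugging Face tokenizer path.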
bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy If you don't already have the dataset, download the Mixtral dataset using the following commands: .. code-block:: shell mkdir mixtral-datasets cd mixtral-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/mixtral-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b pyt_megatron_lm_train_qwen2.5-72b If you don't already have the dataset, download the Mixtral dataset using the following commands: .. code-block:: shell mkdir -p temp/qwen-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/qwen-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. _amd-megatron-lm-run-training-v25.11: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X Series GPUs with the AMD Megatron-LM environment. Before starting training, export the following environment variables. .. tab-set:: .. tab-item:: MI355X and MI350X .. 
code-block:: shell export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NVTE_CK_USES_BWD_V3=1 .. tab-item:: MI325X and MI300X .. code-block:: shell export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NVTE_CK_USES_BWD_V3=1 # Set this on MI325X/MI300X only export NVTE_CK_IS_V3_ATOMIC_FP32=1 Single node training -------------------- .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b To run the training on a single node for Llama 3.3 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.3-70B-Instruct \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MBS=2 \ BS=16 \ TE_FP8=0 \ TP=1 \ PP=1 \ FSDP=1 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b To run training on a single node for Llama 3.1 8B FP8, navigate to the Megatron-LM folder and use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=512 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=10 \ GEMM_TUNING=0 \ bash examples/llama/train_llama3.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh For Llama 3.1 8B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=512 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=10 \ GEMM_TUNING=1 \ bash examples/llama/train_llama3.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b To run the training on a single node for Llama 3.1 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. To run the training on a single node for Llama 3.1 70B FP8, use the following command. .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ bash examples/llama/train_llama3.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. 
code-block:: shell FP8_WEIGHT_TRANSPOSE_CACHE=0 \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ NUM_LAYERS=40 \ bash examples/llama/train_llama3.sh .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b To run training on a single node for Llama 2 7B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh For Llama 2 7B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. container:: model-doc pyt_megatron_lm_train_llama-2-70b To run the training on a single node for Llama 2 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=7 \ BS=56 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 FORCE_BALANCE=true \ RUN_ENV=cluster \ MODEL_SIZE=671B \ TRAIN_ITERS=50 \ SEQ_LEN=4096 \ NUM_LAYERS=3 \ MICRO_BATCH_SIZE=1 GLOBAL_BATCH_SIZE=32 \ PR=bf16 \ TP=1 PP=1 ETP=1 EP=8 \ GEMM_TUNING=1 \ NVTE_CK_USES_BWD_V3=1 \ USE_GROUPED_GEMM=true MOE_USE_LEGACY_GROUPED_GEMM=true \ GPT_LAYER_IN_TE=true \ bash examples/deepseek_v3/train_deepseekv3.sh .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 GEMM_TUNING=1 \ PR=bf16 \ MBS=4 \ AC=none \ SEQ_LEN=4096 \ PAD_LEN=4096 \ TRAIN_ITERS=20 \ bash examples/deepseek_v2/train_deepseekv2.sh .. note:: Note that DeepSeek-V2-Lite is experiencing instability due to GPU memory access fault for large iterations. For stability, it's recommended to use Primus for this workload. See :doc:`primus-megatron`. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b To run training on a single node for Mixtral 8x7B (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. 
code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=0 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=none \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=4096 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x7B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_mixtral-8x22b-proxy To run training on a single node for Mixtral 8x7B (MoE with expert parallel) with 4-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=4 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=full \ NUM_LAYERS=4 \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=8192 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x22B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b To run training on a single node for Qwen 2.5 7B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=0 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B For FP8, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=1 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ FSDP=1 \ CP=1 \ PP=1 \ MBS=3 \ BS=24 \ TE_FP8=0 \ MODEL_SIZE=72 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-72B \ RECOMPUTE_ACTIVATIONS=full \ CKPT_FORMAT=torch_dist .. _amd-megatron-lm-multi-node-examples-v25.11: Multi-node training examples ---------------------------- To run training on multiple nodes, launch the Docker container on each node. For example, for Llama 3 using a two node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=0 \ bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=1 \ bash examples/llama/train_llama3.sh Or, for DeepSeek-V3, an example script ``train_deepseek_v3_slurm.sh`` is provided in ``__ to enable training at scale under a SLURM environment. For example, to run training on 16 nodes, try the following command: .. code-block:: shell sbatch examples/deepseek_v3/train_deepseek_v3_slurm.sh .. _amd-megatron-lm-benchmark-test-vars-v25.11: Key options ----------- The benchmark tests support the following sets of variables. ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for B16 or ``1`` for FP8 -- ``0`` by default. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``FSDP`` ``1`` to enable PyTorch FSDP2. If FSDP is enabled, ``--use-distributed-optimizer``, ``--overlap-param-gather``, and ``--sequence-parallel`` are automatically disabled. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis. 
``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. ``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2, for example. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` / ``TP_SIZE`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``TP`` is disabled when ``FSDP`` is turned on. ``EP`` / ``EP_SIZE`` Expert parallel for MoE models. ``SEQ_LENGTH`` Input sequence length. ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``AC`` Activation checkpointing (``none``, ``sel``, or ``full``) -- ``sel`` by default. ``NUM_LAYERS`` Use reduced number of layers as a proxy model. ``RECOMPUTE_NUM_LAYERS`` Number of layers used for checkpointing recompute. Previous versions ================= See :doc:`previous-versions/megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- .. meta:: :description: How to train a model using LLM Foundry for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ****************************************** Training MPT-30B with LLM Foundry on ROCm ****************************************** MPT-30B is a 30-billion parameter decoder-style transformer-based model from the Mosaic Pretrained Transformer (MPT) family -- learn more about it in MosaicML's research blog `MPT-30B: Raising the bar for open-source foundation models `_. ROCm and ``__ provide a pre-configured training environment for the MPT-30B model using the ``rocm/pytorch-training:v25.5`` base `Docker image `_ and the `LLM Foundry `_ framework. This environment packages the following software components to train on AMD Instinct MI300X Series GPUs: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.4 | +--------------------------+--------------------------------+ | PyTorch | 2.7.0a0+git6374332 | +--------------------------+--------------------------------+ | Flash Attention | 3.0.0.post1 | +--------------------------+--------------------------------+ Using this image, you can build, run, and test the training process for MPT-30B with access to detailed logs and performance metrics. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Getting started =============== The following procedures help you set up the training environment in a reproducible Docker container. This training environment is tailored for training MPT-30B using LLM Foundry and the specific model configurations outlined. Other configurations and run conditions outside those described in this document are not validated. .. tab-set:: .. 
tab-item:: MAD-integrated benchmarking On your host machine, clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt Use this command to initiate the MPT-30B training benchmark. .. code-block:: shell madengine run \ --tags pyt_mpt30b_training \ --keep-model-dir \ --live-output \ --clean-docker-cache .. tip:: If you experience data download failures, set the ``MAD_SECRETS_HFTOKEN`` variable to your Hugging Face access token. See `User access tokens `_ for details. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" .. note:: For improved performance (training throughput), consider enabling TunableOp. By default, ``pyt_mpt30b_training`` runs with TunableOp disabled. To enable it, run ``madengine run`` with the ``--tunableop on`` argument or edit the ``models.json`` configuration before running training. Although this might increase the initial training time, it can result in a performance gain. .. tab-item:: Standalone benchmarking To set up the training environment, clone the ``__ repo and build the Docker image. In this snippet, the image is named ``mosaic_mpt30_image``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD docker build --build-arg MAD_SYSTEM_GPU_ARCHITECTURE=gfx942 -f docker/pyt_mpt30b_training.ubuntu.amd.Dockerfile -t mosaic_mpt30_image . Start a ``mosaic_mpt30_image`` container using the following command. .. code-block:: shell docker run -it --device=/dev/kfd --device=/dev/dri --group-add=video --ipc=host --shm-size=8G mosaic_mpt30_image In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory at ``/workspace/MAD/scripts/pyt_mpt30b_training``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pyt_mpt30b_training To initiate the training process, use the following command. This script uses the hyperparameters defined in ``mpt-30b-instruct.yaml``. .. code-block:: shell source run.sh .. note:: For improved performance (training throughput), consider enabling TunableOp. To enable it, add the ``--tunableop on`` flag. .. code-block:: shell source run.sh --tunableop on Although this might increase the initial training time, it can result in a performance gain. Interpreting the output ======================= The training output will be displayed in the terminal and simultaneously saved to the ``output.txt`` file in the current directory. Key performance metrics will also be extracted and appended to the ``perf_pyt_mpt30b_training.csv`` file. Key performance metrics include: - Training logs: Real-time display of loss metrics, accuracy, and training progress. - Model checkpoints: Periodically saved model snapshots for potential resume or evaluation. - Performance metrics: Detailed summaries of training speed and training loss metrics. - Performance (throughput/samples_per_sec) Overall throughput, measuring the total samples processed per second. Higher values indicate better hardware utilization. - Performance per device (throughput/samples_per_sec) Throughput on a per-device basis, showing how each GPU or CPU is performing. - Language Cross Entropy (metrics/train/LanguageCrossEntropy) Measures prediction accuracy. Lower cross entropy suggests the model’s output is closer to the expected distribution. - Training loss (loss/train/total) Overall training loss. 
A decreasing trend indicates the model is learning effectively. Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. --- :orphan: ******************************************************** JAX MaxText training performance testing version history ******************************************************** This table lists previous versions of the ROCm JAX MaxText Docker image for training performance testing. For detailed information about available models for benchmarking, see the version-specific documentation. You can find tagged previous releases of the ``ROCm/jax-training`` Docker image on `Docker Hub `_. .. list-table:: :header-rows: 1 * - Image version - Components - Resources * - 25.11 - * ROCm 7.1.0 * JAX 0.7.1 - * :doc:`Documentation <../jax-maxtext>` * `Docker Hub `__ * - 25.9.1 - * ROCm 7.0.0 * JAX 0.6.2 - * :doc:`Documentation ` * `Docker Hub (25.9.1) `__ * `Docker Hub (25.9) `__ * - 25.7 - * ROCm 6.4.1 * JAX 0.6.0, 0.5.0 - * :doc:`Documentation ` * `Docker Hub (JAX 0.6.0) `__ * `Docker Hub (JAX 0.5.0) `__ * - 25.5 - * ROCm 6.3.4 * JAX 0.4.35 - * :doc:`Documentation ` * `Docker Hub `__ * - 25.4 - * ROCm 6.3.0 * JAX 0.4.31 - * :doc:`Documentation ` * `Docker Hub `__ --- :orphan: .. meta:: :description: How to train a model using JAX MaxText for ROCm. :keywords: ROCm, AI, LLM, train, jax, torch, Llama, flux, tutorial, docker ************************************** Training a model with MaxText for ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm JAX MaxText training performance documentation. See :doc:`../jax-maxtext` for the latest version. MaxText is a high-performance, open-source framework built on the Google JAX machine learning library to train LLMs at scale. The MaxText framework for ROCm is an optimized fork of the upstream ``__ enabling efficient AI workloads on AMD MI300X Series GPUs. The MaxText for ROCm training Docker (``rocm/jax-training:maxtext-v25.4``) image provides a prebuilt environment for training on AMD Instinct MI300X and MI325X GPUs, including essential components like JAX, XLA, ROCm libraries, and MaxText utilities. It includes the following software components: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.0 | +--------------------------+--------------------------------+ | JAX | 0.4.31 | +--------------------------+--------------------------------+ | Python | 3.10 | +--------------------------+--------------------------------+ | Transformer Engine | 1.12.0.dev0+f81a3eb | +--------------------------+--------------------------------+ | hipBLASLt | git78ec8622 | +--------------------------+--------------------------------+ Supported features and models ============================= MaxText provides the following key features to train large language models efficiently: - Transformer Engine (TE) - Flash Attention (FA) 3 - GEMM tuning - Multi-node support .. _amd-maxtext-model-support-v254: The following models are pre-optimized for performance on AMD Instinct MI300X Series GPUs. 
* Llama 3.1 8B * Llama 3.1 70B * Llama 3 8B * Llama 3 70B * Llama 2 7B * Llama 2 70B * DeepSeek-V2-Lite .. note:: Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). Unsupported features -------------------- Currently, MaxText's default packed input format is not supported. Using this format with the current Docker image results in incorrect attention calculations across different input sequences. Support for packed input format is planned for a future release. System validation ================= If you have already validated your system settings, including NUMA auto-balancing, skip this step. Otherwise, complete the :ref:`system validation and optimization steps ` to set up your system before starting training. Environment setup ================= This Docker image is optimized for specific model configurations outlined as follows. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. .. _amd-maxtext-multi-node-setup-v254: Multi-node setup ---------------- For multi-node environments, ensure you have all the necessary packages for your network device, such as, RDMA. If you're not using a multi-node setup with RDMA, skip ahead to :ref:`amd-maxtext-download-docker-v254`. 1. Install the following packages to build and install the RDMA driver. .. code-block:: shell sudo apt install iproute2 -y sudo apt install -y linux-headers-"$(uname-r)" libelf-dev sudo apt install -y gcc make libtool autoconf librdmacm-dev rdmacm-utils infiniband-diags ibverbs-utils perftest ethtool libibverbs-dev rdma-core strace libibmad5 libibnetdisc5 ibverbs-providers libibumad-dev libibumad3 libibverbs1 libnl-3-dev libnl-route-3-dev Refer to your NIC manufacturer's documentation for further steps on compiling and installing the RoCE driver. For example, for Broadcom, see `Compiling Broadcom NIC software from source `_ in `Ethernet networking guide for AMD Instinct MI300X GPU clusters `_. 2. Set the following environment variables. a. Master address Change `localhost` to the master node's resolvable hostname or IP address: .. code-block:: bash export MASTER_ADDR="${MASTER_ADDR:-localhost}" b. Number of nodes Set the number of nodes you want to train on (for example, ``2``, ``4``, or ``8``): .. code-block:: bash export NNODES="${NNODES:-1}" c. Node ranks Set the rank of each node (``0`` for master, ``1`` for the first worker node, and so on) Node ranks should be unique across all nodes in the cluster. .. code-block:: bash export NODE_RANK="${NODE_RANK:-0}" d. Network interface Update the network interface in the script to match your system's network interface. To find your network interface, run the following (outside of any Docker container): .. code-block:: bash ip a Look for an active interface with an IP address in the same subnet as your other nodes. Then, update the following variable in the script, for example: .. code-block:: bash export NCCL_SOCKET_IFNAME=ens50f0np0 This variable specifies which network interface to use for inter-node communication. Setting this variable to the incorrect interface can result in communication failures or significantly reduced performance. e. RDMA interface Ensure the :ref:`required packages ` are installed on all nodes. Then, set the RDMA interfaces to use for communication. .. 
code-block:: bash # If using Broadcom NIC export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 # If using Mellanox NIC export NCCL_IB_HCA=mlx5_0,mlx5_1,mlx5_2,mlx5_3,mlx5_4,mlx5_5,mlx5_8,mlx5_9 .. _amd-maxtext-download-docker-v254: Download the Docker image ------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/jax-training:maxtext-v25.4 2. Run the Docker container. .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME/.ssh:/root/.ssh --shm-size 128G --name maxtext_training rocm/jax-training:maxtext-v25.4 .. _amd-maxtext-get-started-v254: Getting started =============== The following examples demonstrate how to get started with single node and multi-node training using the benchmarking scripts provided at ``__. .. important:: The provided scripts launch a Docker container and execute a benchmark. Ensure you run these commands outside of any existing Docker container. Before running any benchmarks, ensure the ``$HF_HOME`` environment variable is set correctly and points to your Hugging Face cache directory. Single node training benchmarking examples ------------------------------------------ * Example 1: Single node training with Llama 2 7B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama2_7b.sh Run the single node training benchmark: IMAGE="rocm/jax-training:maxtext-v25.4" bash ./llama2_7b.sh * Example 2: Single node training with Llama 2 70B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama2_70b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.4" bash ./llama2_70b.sh * Example 3: Single node training with Llama 3 8B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3_8b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.4" bash ./llama3_8b.sh * Example 4: Single node training with Llama 3 70B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3_70b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.4" bash ./llama3_70b.sh * Example 5: Single node training with DeepSeek V2 16B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/deepseek_v2_16b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.4" bash ./deepseek_v2_16b.sh .. note:: The reported TFLOP/s by MaxText for DeepSeek is not accurate. Use the tokens/s as a performance indicator. Multi-node training benchmarking examples ----------------------------------------- The following examples use SLURM for running on multiple nodes -- the commands might need to be adjusted for your own cluster setup. * Example 1: Multi-node training with Llama 2 7B Download the benchmarking script: .. 
code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama2_7b_multinode.sh Run the multi-node training benchmark. For example: .. code-block:: shell sbatch -N llama2_7b_multinode.sh * Example 2: Multi-node training with Llama 2 70B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama2_70b_multinode.sh Run the multi-node training benchmark. For example: .. code-block:: shell sbatch -N llama2_70b_multinode.sh * Example 3: Multi-node training with Llama 3 8B model Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3_8b_multinode.sh Run the multi-node training benchmark. For example: .. code-block:: shell sbatch -N llama3_8b_multinode.sh * Example 4: Multi-node training with Llama 3 70B model Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3_70b_multinode.sh Run the multi-node training benchmark. For example: .. code-block:: shell sbatch -N llama3_70b_multinode.sh Previous versions ================= See :doc:`jax-maxtext-history` to find documentation for previous releases of the ``ROCm/jax-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using JAX MaxText for ROCm. :keywords: ROCm, AI, LLM, train, jax, torch, Llama, flux, tutorial, docker ************************************** Training a model with MaxText for ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm JAX MaxText training performance documentation. See :doc:`../jax-maxtext` for the latest version. MaxText is a high-performance, open-source framework built on the Google JAX machine learning library to train LLMs at scale. The MaxText framework for ROCm is an optimized fork of the upstream ``__ enabling efficient AI workloads on AMD MI300X Series GPUs. The MaxText for ROCm training Docker (``rocm/jax-training:maxtext-v25.5``) image provides a prebuilt environment for training on AMD Instinct MI300X and MI325X GPUs, including essential components like JAX, XLA, ROCm libraries, and MaxText utilities. It includes the following software components: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.4 | +--------------------------+--------------------------------+ | JAX | 0.4.35 | +--------------------------+--------------------------------+ | Python | 3.10.12 | +--------------------------+--------------------------------+ | Transformer Engine | 1.12.0.dev0+b8b92dc | +--------------------------+--------------------------------+ | hipBLASLt | 0.13.0-ae9c477a | +--------------------------+--------------------------------+ Supported features and models ============================= MaxText provides the following key features to train large language models efficiently: - Transformer Engine (TE) - Flash Attention (FA) 3 - GEMM tuning - Multi-node support .. _amd-maxtext-model-support-v255: The following models are pre-optimized for performance on AMD Instinct MI300X Series GPUs. * Llama 3.3 70B * Llama 3.1 8B * Llama 3.1 70B * Llama 3 8B * Llama 3 70B * Llama 2 7B * Llama 2 70B * DeepSeek-V2-Lite .. 
note:: Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). Unsupported features -------------------- Currently, MaxText's default packed input format is not supported. Using this format with the current Docker image results in incorrect attention calculations across different input sequences. Support for packed input format is planned for a future release. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Environment setup ================= This Docker image is optimized for specific model configurations outlined as follows. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. .. _amd-maxtext-multi-node-setup-v255: Multi-node setup ---------------- For multi-node environments, ensure you have all the necessary packages for your network device, such as, RDMA. If you're not using a multi-node setup with RDMA, skip ahead to :ref:`amd-maxtext-download-docker-v255`. 1. Install the following packages to build and install the RDMA driver. .. code-block:: shell sudo apt install iproute2 -y sudo apt install -y linux-headers-"$(uname-r)" libelf-dev sudo apt install -y gcc make libtool autoconf librdmacm-dev rdmacm-utils infiniband-diags ibverbs-utils perftest ethtool libibverbs-dev rdma-core strace libibmad5 libibnetdisc5 ibverbs-providers libibumad-dev libibumad3 libibverbs1 libnl-3-dev libnl-route-3-dev Refer to your NIC manufacturer's documentation for further steps on compiling and installing the RoCE driver. For example, for Broadcom, see `Compiling Broadcom NIC software from source `_ in `Ethernet networking guide for AMD Instinct MI300X GPU clusters `_. 2. Set the following environment variables. a. Master address Change ``localhost`` to the master node's resolvable hostname or IP address: .. code-block:: bash export MASTER_ADDR="${MASTER_ADDR:-localhost}" b. Number of nodes Set the number of nodes you want to train on (for example, ``2``, ``4``, or ``8``): .. code-block:: bash export NNODES="${NNODES:-1}" c. Node ranks Set the rank of each node (``0`` for master, ``1`` for the first worker node, and so on) Node ranks should be unique across all nodes in the cluster. .. code-block:: bash export NODE_RANK="${NODE_RANK:-0}" d. Network interface Update the network interface in the script to match your system's network interface. To find your network interface, run the following (outside of any Docker container): .. code-block:: bash ip a Look for an active interface with an IP address in the same subnet as your other nodes. Then, update the following variable in the script, for example: .. code-block:: bash export NCCL_SOCKET_IFNAME=ens50f0np0 This variable specifies which network interface to use for inter-node communication. Setting this variable to the incorrect interface can result in communication failures or significantly reduced performance. e. 
RDMA interface Ensure the :ref:`required packages ` are installed on all nodes. Then, set the RDMA interfaces to use for communication. .. code-block:: bash # If using Broadcom NIC export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 # If using Mellanox NIC export NCCL_IB_HCA=mlx5_0,mlx5_1,mlx5_2,mlx5_3,mlx5_4,mlx5_5,mlx5_8,mlx5_9 .. _amd-maxtext-download-docker-v255: Pull the Docker image --------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/jax-training:maxtext-v25.5 2. Use the following command to launch the Docker container. Note that the benchmarking scripts used in the :ref:`following section ` automatically launch the Docker container and execute the benchmark. .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME/.ssh:/root/.ssh --shm-size 128G --name maxtext_training rocm/jax-training:maxtext-v25.5 .. _amd-maxtext-get-started-v255: Getting started =============== The following examples demonstrate how to get started with single node and multi-node training using the benchmarking scripts provided at ``__. .. important:: The provided scripts launch a Docker container and execute a benchmark. Ensure you run these commands outside of any existing Docker container. Before running any benchmarks, ensure the ``$HF_HOME`` environment variable is set correctly and points to your Hugging Face cache directory. Single node training benchmarking examples ------------------------------------------ * Example 1: Single node training with Llama 2 7B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama2_7b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.5" bash ./llama2_7b.sh * Example 2: Single node training with Llama 2 70B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama2_70b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.5" bash ./llama2_70b.sh * Example 3: Single node training with Llama 3 8B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3_8b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.5" bash ./llama3_8b.sh * Example 4: Single node training with Llama 3 70B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3_70b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.5" bash ./llama3_70b.sh * Example 5: Single node training with Llama 3.3 70B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3.3_70b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.5" bash ./llama3.3_70b.sh * Example 6: Single node training with DeepSeek V2 16B Download the benchmarking script: .. 
code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/deepseek_v2_16b.sh Run the single node training benchmark: .. code-block:: shell IMAGE="rocm/jax-training:maxtext-v25.5" bash ./deepseek_v2_16b.sh .. note:: The reported TFLOP/s by MaxText for DeepSeek is not accurate. Use the tokens/s as a performance indicator. Multi-node training benchmarking examples ----------------------------------------- The following examples use SLURM for running on multiple nodes -- the commands might need to be adjusted for your own cluster setup. * Example 1: Multi-node training with Llama 2 7B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama2_7b_multinode.sh Run the multi-node training benchmark. For example: .. code-block:: shell sbatch -N llama2_7b_multinode.sh * Example 2: Multi-node training with Llama 2 70B Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama2_70b_multinode.sh Run the multi-node training benchmark. For example: .. code-block:: shell sbatch -N llama2_70b_multinode.sh * Example 3: Multi-node training with Llama 3 8B model Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3_8b_multinode.sh Run the multi-node training benchmark. For example: .. code-block:: shell sbatch -N llama3_8b_multinode.sh * Example 4: Multi-node training with Llama 3 70B model Download the benchmarking script: .. code-block:: shell wget https://raw.githubusercontent.com/ROCm/maxtext/refs/heads/main/benchmarks/gpu-rocm/llama3_70b_multinode.sh Run the multi-node training benchmark. For example: .. code-block:: shell sbatch -N llama3_70b_multinode.sh Previous versions ================= See :doc:`jax-maxtext-history` to find documentation for previous releases of the ``ROCm/jax-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using JAX MaxText for ROCm. :keywords: ROCm, AI, LLM, train, jax, torch, Llama, flux, tutorial, docker ****************************************** Training a model with JAX MaxText on ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm JAX MaxText training performance documentation. See :doc:`../jax-maxtext` for the latest version. MaxText is a high-performance, open-source framework built on the Google JAX machine learning library to train LLMs at scale. The MaxText framework for ROCm is an optimized fork of the upstream ``__ enabling efficient AI workloads on AMD MI300X series GPUs. The MaxText for ROCm training Docker image provides a prebuilt environment for training on AMD Instinct MI300X and MI325X GPUs, including essential components like JAX, XLA, ROCm libraries, and MaxText utilities. It includes the following software components: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/jax-maxtext-v25.7-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for docker in dockers %} {% set jax_version = docker.components["JAX"] %} .. tab-item:: ``{{ docker.pull_tag }}`` :sync: {{ docker.pull_tag }} .. 
list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% if jax_version == "0.6.0" %} .. note:: Shardy is a new config in JAX 0.6.0. You might get related errors if it's not configured correctly. For now, you can turn it off by setting ``shardy=False`` during the training run. You can also follow the `migration guide `__ to enable it. {% endif %} {% endfor %} MaxText on ROCm provides the following key features to train large language models efficiently: - Transformer Engine (TE) - Flash Attention (FA) 3 -- with or without sequence input packing - GEMM tuning - Multi-node support - NANOO FP8 quantization support .. _amd-maxtext-model-support-v257: Supported models ================ The following models are pre-optimized for performance on AMD Instinct MI300 series GPUs. Some instructions, commands, and available training configurations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/jax-maxtext-v25.7-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
.. Model and variant selector rendered from ``model_groups``: one entry per ``model_group.group`` (Model) and per ``model.model`` (Variant). Raw HTML markup omitted.
.. note:: Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Environment setup ================= This Docker image is optimized for specific model configurations outlined as follows. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. Pull the Docker image --------------------- Use the following command to pull the Docker image from Docker Hub. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/jax-maxtext-v25.7-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for docker in dockers %} {% set jax_version = docker.components["JAX"] %} .. tab-item:: JAX {{ jax_version }} :sync: {{ docker.pull_tag }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} .. _amd-maxtext-multi-node-setup-v257: Multi-node configuration ------------------------ See :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. .. _amd-maxtext-get-started-v257: Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/jax-maxtext-v25.7-benchmark-models.yaml .. _vllm-benchmark-mad: {% set dockers = data.dockers %} {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: {% if model.mad_tag and "single-node" in model.doc_options %} .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. Use this command to run the performance benchmark test on the {{ model.model }} model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/perf.csv/``. {% endif %} .. tab-item:: Standalone benchmarking .. rubric:: Download the Docker image and required scripts Run the JAX MaxText benchmark tool independently by starting the Docker container as shown in the following snippet. .. tab-set:: {% for docker in dockers %} {% set jax_version = docker.components["JAX"] %} .. tab-item:: JAX {{ jax_version }} :sync: {{ docker.pull_tag }} .. 
code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} {% if model.model_repo and "single-node" in model.doc_options %} .. rubric:: Single node training 1. Set up environment variables. .. code-block:: shell export MAD_SECRETS_HFTOKEN= export HF_HOME= ``MAD_SECRETS_HFTOKEN`` is your Hugging Face access token to access models, tokenizers, and data. See `User access tokens `__. ``HF_HOME`` is where ``huggingface_hub`` will store local data. See `huggingface_hub CLI `__. If you already have downloaded or cached Hugging Face artifacts, set this variable to that path. Downloaded files typically get cached to ``~/.cache/huggingface``. 2. Launch the Docker container. .. tab-set:: {% for docker in dockers %} {% set jax_version = docker.components["JAX"] %} .. tab-item:: JAX {{ jax_version }} :sync: {{ docker.pull_tag }} .. code-block:: shell docker run -it \ --device=/dev/dri \ --device=/dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ -v $HF_HOME:/hf_cache \ -e HF_HOME=/hf_cache \ -e MAD_SECRETS_HFTOKEN=$MAD_SECRETS_HFTOKEN --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} {% endfor %} 3. In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``MAD/scripts/jax-maxtext``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/jax-maxtext 4. Run the setup scripts to install libraries and datasets needed for benchmarking. .. code-block:: shell ./jax-maxtext_benchmark_setup.sh -m {{ model.model_repo }} 5. To run the training benchmark without quantization, use the following command: .. code-block:: shell ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} For quantized training, use the following command: .. code-block:: shell ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} -q nanoo_fp8 {% endif %} {% if model.multinode_training_script and "multi-node" in model.doc_options %} .. rubric:: Multi-node training The following examples use SLURM to run on multiple nodes. .. note:: The following scripts will launch the Docker container and run the benchmark. Run them outside of any Docker container. 1. Make sure ``$HF_HOME`` is set before running the test. See `ROCm benchmarking `__ for more details on downloading the Llama models before running the benchmark. 2. To run multi-node training for {{ model.model }}, use the `multi-node training script `__ under the ``scripts/jax-maxtext/gpu-rocm/`` directory. 3. Run the multi-node training benchmark script. .. code-block:: shell sbatch -N {{ model.multinode_training_script }} {% else %} .. rubric:: Multi-node training For multi-node training examples, choose a model from :ref:`amd-maxtext-model-support-v257` with an available `multi-node training script `__. {% endif %} {% endfor %} {% endfor %} Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`jax-maxtext-history` to find documentation for previous releases of the ``ROCm/jax-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using JAX MaxText for ROCm. 
:keywords: ROCm, AI, LLM, train, jax, torch, Llama, flux, tutorial, docker ****************************************** Training a model with JAX MaxText on ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm JAX MaxText training performance documentation. See :doc:`../jax-maxtext` for the latest version. .. note:: We have refreshed the ``rocm/jax-training:maxtext-v25.9`` image as ``rocm/jax-training:maxtext-v25.9.1``. The refresh includes a fix for segmentation fault issues during launch. The MaxText for ROCm training Docker image provides a prebuilt environment for training on AMD Instinct MI355X, MI350X, MI325X, and MI300X GPUs, including essential components like JAX, XLA, ROCm libraries, and MaxText utilities. It includes the following software components: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/jax-maxtext-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for docker in dockers %} {% set jax_version = docker.components["JAX"] %} .. tab-item:: ``{{ docker.pull_tag }}`` :sync: {{ docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% if jax_version == "0.6.0" %} .. note:: Shardy is a new config in JAX 0.6.0. You might get related errors if it's not configured correctly. For now, you can turn it off by setting ``shardy=False`` during the training run. You can also follow the `migration guide `__ to enable it. {% endif %} {% endfor %} MaxText on ROCm provides the following key features to train large language models efficiently: - Transformer Engine (TE) - Flash Attention (FA) 3 -- with or without sequence input packing - GEMM tuning - Multi-node support - NANOO FP8 (for MI300X series GPUs) and FP8 (for MI355X and MI350X GPUs) quantization support .. _amd-maxtext-model-support-v259: Supported models ================ The following models are pre-optimized for performance on AMD Instinct GPUs. Some instructions, commands, and available training configurations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/jax-maxtext-v25.9-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
.. Model and variant selector rendered from ``model_groups``: one entry per ``model_group.group`` (Model) and per ``model.model`` (Variant). Raw HTML markup omitted.
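As noted below, some of these checkpoints are gated on Hugging Face and require an approved license before they can be downloaded. Before starting a long benchmark run, it can be worth confirming that your access token is valid. The following is a minimal sketch, assuming the ``huggingface_hub`` CLI is installed on the host (it is not necessarily part of this Docker image) and that you have exported ``MAD_SECRETS_HFTOKEN`` as described in the benchmarking steps below.

.. code-block:: shell

   # Pre-flight check (hypothetical step, not part of the benchmark scripts);
   # the huggingface_hub[cli] extra provides the huggingface-cli tool
   pip install -U "huggingface_hub[cli]"

   # Cache the token locally and confirm that it is valid
   huggingface-cli login --token "$MAD_SECRETS_HFTOKEN"
   huggingface-cli whoami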
.. note:: Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. Environment setup ================= This Docker image is optimized for specific model configurations outlined as follows. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. Pull the Docker image --------------------- Use the following command to pull the Docker image from Docker Hub. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/jax-maxtext-v25.9-benchmark-models.yaml {% set docker = data.dockers[0] %} .. code-block:: shell docker pull {{ docker.pull_tag }} .. _amd-maxtext-multi-node-setup-v259: Multi-node configuration ------------------------ See :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. .. _amd-maxtext-get-started-v259: Benchmarking ============ Once the setup is complete, choose between two options to reproduce the benchmark results: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/jax-maxtext-v25.9-benchmark-models.yaml .. _vllm-benchmark-mad: {% set docker = data.dockers[0] %} {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{model.mad_tag}} .. tab-set:: {% if model.mad_tag and "single-node" in model.doc_options %} .. tab-item:: MAD-integrated benchmarking The following run command is tailored to {{ model.model }}. See :ref:`amd-maxtext-model-support-v259` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. Use this command to run the performance benchmark test on the {{ model.model }} model using one GPU with the :literal:`{{model.precision}}` data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{model.mad_tag}} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the model are collected in the following path: ``~/MAD/perf.csv/``. {% endif %} .. tab-item:: Standalone benchmarking The following commands are optimized for {{ model.model }}. See :ref:`amd-maxtext-model-support-v259` to switch to another available model. Some instructions and resources might not be available for all models and configurations. .. rubric:: Download the Docker image and required scripts Run the JAX MaxText benchmark tool independently by starting the Docker container as shown in the following snippet. .. 
code-block:: shell docker pull {{ docker.pull_tag }} {% if model.model_repo and "single-node" in model.doc_options %} .. rubric:: Single node training 1. Set up environment variables. .. code-block:: shell export MAD_SECRETS_HFTOKEN= export HF_HOME= ``MAD_SECRETS_HFTOKEN`` is your Hugging Face access token to access models, tokenizers, and data. See `User access tokens `__. ``HF_HOME`` is where ``huggingface_hub`` will store local data. See `huggingface_hub CLI `__. If you already have downloaded or cached Hugging Face artifacts, set this variable to that path. Downloaded files typically get cached to ``~/.cache/huggingface``. 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device=/dev/dri \ --device=/dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ -v $HF_HOME:/hf_cache \ -e HF_HOME=/hf_cache \ -e MAD_SECRETS_HFTOKEN=$MAD_SECRETS_HFTOKEN --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} 3. In the Docker container, clone the ROCm MAD repository and navigate to the benchmark scripts directory at ``MAD/scripts/jax-maxtext``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/jax-maxtext 4. Run the setup scripts to install libraries and datasets needed for benchmarking. .. code-block:: shell ./jax-maxtext_benchmark_setup.sh -m {{ model.model_repo }} 5. To run the training benchmark without quantization, use the following command: .. code-block:: shell ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} For quantized training, run the script with the appropriate option for your Instinct GPU. .. tab-set:: .. tab-item:: MI355X and MI350X For ``fp8`` quantized training on MI355X and MI350X GPUs, use the following command: .. code-block:: shell ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} -q fp8 {% if model.model_repo not in ["Llama-3.1-70B", "Llama-3.3-70B"] %} .. tab-item:: MI325X and MI300X For ``nanoo_fp8`` quantized training on MI300X series GPUs, use the following command: .. code-block:: shell ./jax-maxtext_benchmark_report.sh -m {{ model.model_repo }} -q nanoo_fp8 {% endif %} {% endif %} {% if model.multinode_training_script and "multi-node" in model.doc_options %} .. rubric:: Multi-node training The following examples use SLURM to run on multiple nodes. .. note:: The following scripts will launch the Docker container and run the benchmark. Run them outside of any Docker container. 1. Make sure ``$HF_HOME`` is set before running the test. See `ROCm benchmarking `__ for more details on downloading the Llama models before running the benchmark. 2. To run multi-node training for {{ model.model }}, use the `multi-node training script `__ under the ``scripts/jax-maxtext/gpu-rocm/`` directory. 3. Run the multi-node training benchmark script. .. code-block:: shell sbatch -N {{ model.multinode_training_script }} {% else %} .. rubric:: Multi-node training For multi-node training examples, choose a model from :ref:`amd-maxtext-model-support-v259` with an available `multi-node training script `__. {% endif %} {% endfor %} {% endfor %} Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. 
- For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`jax-maxtext-history` to find documentation for previous releases of the ``ROCm/jax-training`` Docker image. --- :orphan: ******************************************************** Megatron-LM training performance testing version history ******************************************************** This table lists previous versions of the ROCm Megatron-LM training Docker image for inference performance testing. For detailed information about available models for benchmarking, see the version-specific documentation. You can find tagged previous releases of the ``ROCm/megatron-lm`` Docker image on `Docker Hub `__. .. list-table:: :header-rows: 1 * - Image version - Components - Resources * - v25.11 - * ROCm 7.1.0 * PyTorch 2.10.0.dev20251112+rocm7.1 - * :doc:`Primus Megatron documentation <../primus-megatron>` * :doc:`Megatron-LM (legacy) documentation <../megatron-lm>` * `Docker Hub `__ * - v25.10 - * ROCm 7.1.0 * PyTorch 2.10.0.dev20251112+rocm7.1 - * :doc:`Primus Megatron documentation ` * :doc:`Megatron-LM (legacy) documentation ` * `Docker Hub `__ * - v25.9 - * ROCm 7.0.0 * Primus 0.3.0 * PyTorch 2.9.0.dev20250821+rocm7.0.0.lw.git125803b7 - * :doc:`Primus Megatron documentation ` * :doc:`Megatron-LM (legacy) documentation ` * `Docker Hub (gfx950) `__ * `Docker Hub (gfx942) `__ * - v25.8 - * ROCm 6.4.3 * PyTorch 2.8.0a0+gitd06a406 - * :doc:`Primus Megatron documentation ` * :doc:`Megatron-LM (legacy) documentation ` * `Docker Hub (py310) `__ * - v25.7 - * ROCm 6.4.2 * PyTorch 2.8.0a0+gitd06a406 - * :doc:`Primus Megatron documentation ` * :doc:`Megatron-LM (legacy) documentation ` * `Docker Hub (py310) `__ * - v25.6 - * ROCm 6.4.1 * PyTorch 2.8.0a0+git7d205b2 - * :doc:`Documentation ` * `Docker Hub (py312) `__ * `Docker Hub (py310) `__ * - v25.5 - * ROCm 6.3.4 * PyTorch 2.8.0a0+gite2f9759 - * :doc:`Documentation ` * `Docker Hub (py312) `__ * `Docker Hub (py310) `__ * - v25.4 - * ROCm 6.3.0 * PyTorch 2.7.0a0+git637433 - * :doc:`Documentation ` * `Docker Hub `__ * - v25.3 - * ROCm 6.3.0 * PyTorch 2.7.0a0+git637433 - * :doc:`Documentation ` * `Docker Hub `__ * - v24.12-dev - * ROCm 6.1.0 * PyTorch 2.4.0 - * :doc:`Documentation ` * `Docker Hub `__ --- :orphan: ***************************************************************** Migrating workloads to Primus (Megatron backend) from Megatron-LM ***************************************************************** Primus supports Megatron-Core as backend optimization library, replacing ROCm Megatron-LM. This document outlines the steps to migrate workload from ROCm Megatron-LM to Primus with the Megatron backend. Model architecture ================== ROCm Megatron-LM defines model architecture parameters in the training scripts; for example, the Llama 3 8B model parameters are defined in `examples/llama/train_llama3.sh `__ as shown below: .. code-block:: bash HIDDEN_SIZE=4096 FFN_HIDDEN_SIZE=14336 NUM_LAYERS=32 NUM_HEADS=32 NUM_KV_HEADS=8 Primus defines the model architecture through model YAML configuration files inside the ``primus/configs/models/megatron/`` repository. For example, Llama 3 8B model architecture parameters are defined in `primus/configs/models/megatron/llama3_8B.yaml `__ as shown below: .. 
code-block:: yaml bases: - llama3_base.yaml tokenizer_type: Llama3Tokenizer tokenizer_model: meta-llama/Llama-3.1-8B ffn_hidden_size: 14336 hidden_size: 4096 num_attention_heads: 32 num_layers: 32 num_query_groups: 8 Primus' model config files follow a hierarchical design, meaning that new model config YAMLs can inherit existing model config files by importing them as bases. For example, `llama3.1_8B.yaml `__ uses ``llama3_8B.yaml`` as a base config and overrides few parameters, as shown below. In this example, ``llama3.1_8B`` overrides the ``max_position_embeddings`` value: .. code-block:: yaml bases: - llama3_8B.yaml tokenizer_type: Llama3Tokenizer tokenizer_model: meta-llama/Llama-3.1-8B max_position_embeddings: 131072 .. tip:: Primus provides ``llama_base.yaml`` as the base configuration, which can be used as bases for additional model architectures. For example, `mixtral_base.yaml `__ and `deepseek_v3_base.yaml `__ define ``llama_base.yaml`` as its base. .. code-block:: yaml # Example mixtral_base.yaml: bases: - llama_base.yaml init_method_std: 0.01 rotary_base: 1000000 qk_layernorm: false group_query_attention: true num_query_groups: 8 # moe parameters num_experts: 8 moe_router_topk: 2 moe_router_load_balancing_type: aux_loss moe_aux_loss_coeff: 1e-2 moe_grouped_gemm: true moe_token_dispatcher_type: alltoall It is recommended to add a new ``${MODEL_NAME}_base.yaml`` to add a new category of model and define new models on top of it. For example, to add Qwen2.5 models in Primus, we define `qwen2.5_base.yaml `__ and build `qwen2.5_7B.yaml `__ and `qwen2.5_72B.yaml `__ using ``qwen2.5_base.yaml`` as the base config. Training parameters =================== ROCm Megatron-LM also defines the training parameters, like batch size, tensor-parallelism, precision, as so on, in the training scripts. For example, Llama3 8B model parameters are defined in `examples/llama/train_llama3.sh `__ as shown below: .. code-block:: bash TP="${TP:-8}" PP="${PP:-1}" CP="${CP:-1}" MBS="${MBS:-1}" BS="${BS:-8}" Primus defines the training parameters in top-level YAML files -- see `examples/megatron/configs/ `__. For example, the `llama3.1_8B-pretrain.yaml `__ configuration imports the ``llama3.1_8B.yaml`` model architecture file. Users can then override the default training parameters in ``llama3.1_8B-pretrain.yaml``. .. code-block:: yaml # model to run model: llama3.1_8B.yaml # Model architecture yaml overrides: # log # disable_wandb: false # disable_tensorboard: false stderr_sink_level: DEBUG log_avg_skip_iterations: 2 log_avg_reset_interval: 50 train_iters: 50 micro_batch_size: 2 global_batch_size: 128 seq_length: 8192 max_position_embeddings: 8192 lr: 1.0e-5 min_lr: 0.0 lr_warmup_iters: 2 lr_decay_iters: null lr_decay_style: cosine weight_decay: 0.1 adam_beta1: 0.9 adam_beta2: 0.95 eod_mask_loss: true init_method_std: 0.008 norm_epsilon: 1.0e-6 Backward compatibility with Megatron-LM ======================================= The Dockerized environment used for Primus maintains compatibility with Megatron-LM with limited support. To roll back to using Megatron-LM, follow these steps. .. code-block:: shell cd /workspace/Megatron-LM/ pip uninstall megatron-core pip install -e . Once Megatron-LM is installed, follow :doc:`the documentation <../megatron-lm>` to run workloads as usual. --- :orphan: .. 
meta:: :description: How to train a model using ROCm Megatron-LM :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ************************************** Training a model with ROCm Megatron-LM ************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../megatron-lm` for the latest version. .. _amd-megatron-lm: The ROCm Megatron-LM framework is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X GPUs, AMD Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to :ref:`support models ` like Meta's Llama 2, Llama 3, and Llama 3.1, enabling developers to train next-generation AI models with greater efficiency. See the GitHub repository at ``__. For ease of use, AMD provides a ready-to-use Docker image for MI300X GPUs containing essential components, including PyTorch, PyTorch Lightning, ROCm libraries, and Megatron-LM utilities. It contains the following software to accelerate training workloads: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.1 | +--------------------------+--------------------------------+ | PyTorch | 2.4.0 | +--------------------------+--------------------------------+ | PyTorch Lightning | 2.4.0 | +--------------------------+--------------------------------+ | Megatron Core | 0.9.0 | +--------------------------+--------------------------------+ | Transformer Engine | 1.5.0 | +--------------------------+--------------------------------+ | Flash Attention | v2.6 | +--------------------------+--------------------------------+ | Transformers | 4.44.0 | +--------------------------+--------------------------------+ Supported features and models ============================= Megatron-LM provides the following key features to train large language models efficiently: - Transformer Engine (TE) - APEX - GEMM tuning - Torch.compile - 3D parallelism: TP + SP + CP - Distributed optimizer - Flash Attention (FA) 2 - Fused kernels - Pre-training .. _amd-megatron-lm-model-support-24-12: The following models are pre-optimized for performance on the AMD Instinct MI300X GPU. * Llama 2 7B * Llama 2 70B * Llama 3 8B * Llama 3 70B * Llama 3.1 8B * Llama 3.1 70B Prerequisite system validation steps ==================================== Complete the following system validation and optimization steps to set up your system before starting training. Disable NUMA auto-balancing --------------------------- Generally, application performance can benefit from disabling NUMA auto-balancing. However, it might be detrimental to performance with certain types of workloads. Run the command ``cat /proc/sys/kernel/numa_balancing`` to check your current NUMA (Non-Uniform Memory Access) settings. Output ``0`` indicates this setting is disabled. If there is no output or the output is ``1``, run the following command to disable NUMA auto-balancing. .. code-block:: shell sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' See :ref:`System validation and optimization ` for more information. 
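Note that writing to ``/proc/sys/kernel/numa_balancing`` only lasts until the next reboot. If you want the setting to survive reboots, one option is a sysctl drop-in file. The following is a minimal sketch only; the file name is arbitrary, and persistence mechanisms can vary by distribution.

.. code-block:: shell

   # Persist the disabled NUMA auto-balancing setting across reboots
   echo 'kernel.numa_balancing = 0' | sudo tee /etc/sysctl.d/99-disable-numa-balancing.conf

   # Reload sysctl settings and confirm the value is 0
   sudo sysctl --system
   cat /proc/sys/kernel/numa_balancing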
Hardware verification with ROCm ------------------------------- Use the command ``rocm-smi --setperfdeterminism 1900`` to set the max clock speed up to 1900 MHz instead of the default 2100 MHz. This can reduce the chance of a PCC event lowering the attainable GPU clocks. This setting will not be required for new IFWI releases with the production PRC feature. You can restore this setting to its default value with the ``rocm-smi -r`` command. Run the command: .. code-block:: shell rocm-smi --setperfdeterminism 1900 See `Hardware verification with ROCm `_ for more information. RCCL Bandwidth Test ------------------- ROCm Collective Communications Library (RCCL) is a standalone library of standard collective communication routines for GPUs. See the :doc:`RCCL documentation ` for more information. Before starting pre-training, running a RCCL bandwidth test helps ensure that the multi-GPU or multi-node setup is optimized for efficient distributed training. Running the RCCL bandwidth test helps verify that: - The GPUs can communicate across nodes or within a single node. - The interconnect (such as InfiniBand, Ethernet, or Infinite fabric) is functioning as expected and provides adequate bandwidth for communication. - No hardware setup or cabling issues could affect the communication between GPUs Tuning and optimizing hyperparameters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In distributed training, specific hyperparameters related to distributed communication can be tuned based on the results of the RCCL bandwidth test. These variables are already set in the Docker image: .. code-block:: shell # force all RCCL streams to be high priority export TORCH_NCCL_HIGH_PRIORITY=1 # specify which RDMA interfaces to use for communication export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 # define the Global ID index used in RoCE mode export NCCL_IB_GID_INDEX=3 # avoid data corruption/mismatch issue that existed in past releases export RCCL_MSCCL_ENABLE=0 Running the RCCL Bandwidth Test ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ It's recommended you run the RCCL bandwidth test before launching training. It ensures system performance is sufficient to launch training. RCCL is not included in the AMD Megatron-LM Docker image; follow the instructions in ``__ to get started. See :ref:`mi300x-rccl` for more information. Run on 8 GPUs (``-g 8``), scanning from 8 bytes to 10 GB: .. code-block:: shell ./build/all_reduce_perf -b 8 -e 10G -f 2 -g 8 .. image:: /data/how-to/rocm-for-ai/rccl-tests-8-gpu.png :width: 800 Using one MPI process per GPU and ``-g 1`` for performance-oriented runs on both single-node and multi-node is recommended. So, a run on 8 GPUs looks something like: .. code-block:: shell mpirun -np 8 --bind-to numa ./build/all_reduce_perf -b 8 -e 10G -f 2 -g 1 .. image:: /data/how-to/rocm-for-ai/rccl-tests-1-mpi-process-per-gpu.png :width: 800 Running with one MPI process per GPU ensures a one-to-one mapping for CPUs and GPUs, which can be beneficial for smaller message sizes. This better represents the real-world use of RCCL in deep learning frameworks like PyTorch and TensorFlow. Use the following script to run the RCCL test for four MI300X GPU nodes. Modify paths and node addresses as needed. .. 
code-block:: /home/$USER/ompi_for_gpu/ompi/bin/mpirun -np 32 -H tw022:8,tw024:8,tw010:8, tw015:8 \ --mca pml ucx \ --mca btl ^openib \ -x NCCL_SOCKET_IFNAME=ens50f0np0 \ -x NCCL_IB_HCA=rdma0:1,rdma1:1,rdma2:1,rdma3:1,rdma4:1,rdma5:1,rdma6:1,rdma7:1 \ -x NCCL_IB_GID_INDEX=3 \ -x NCCL_MIN_NCHANNELS=40 \ -x NCCL_DEBUG=version \ $HOME/rccl-tests/build/all_reduce_perf -b 8 -e 8g -f 2 -g 1 .. image:: /data/how-to/rocm-for-ai/rccl-tests-4-mi300x-gpu-nodes.png :width: 800 .. _mi300x-amd-megatron-lm-training-v2412: Start training on MI300X GPUs ===================================== The pre-built ROCm Megatron-LM environment allows users to quickly validate system performance, conduct training benchmarks, and achieve superior performance for models like Llama 2 and Llama 3.1. Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on the MI300X GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v2412: Download the Docker image and required packages ----------------------------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/megatron-lm:24.12-dev 2. Launch the Docker container. .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $CACHE_DIR:/root/.cache --name megatron-dev-env rocm/megatron-lm:24.12-dev /bin/bash 3. Clone the ROCm Megatron-LM repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/Megatron-LM cd Megatron-LM .. note:: This release is validated with ``ROCm/Megatron-LM`` commit `bb93ccb `_. Checking out this specific commit is recommended for a stable and reproducible environment. .. code-block:: shell git checkout bb93ccbfeae6363c67b361a97a27c74ab86e7e92 Prepare training datasets ------------------------- If you already have the preprocessed data, you can skip this section. Use the following command to process datasets. We use GPT data as an example. You may change the merge table, use an end-of-document token, remove sentence splitting, and use the tokenizer type. .. code-block:: shell python tools/preprocess_data.py \ --input my-corpus.json \ --output-prefix my-gpt2 \ --vocab-file gpt2-vocab.json \ --tokenizer-type GPT2BPETokenizer \ --merge-file gpt2-merges.txt \ --append-eod In this case, the automatically generated output files are named ``my-gpt2_text_document.bin`` and ``my-gpt2_text_document.idx``. .. image:: /data/how-to/rocm-for-ai/prep-training-datasets-my-gpt2-text-document.png :width: 800 .. _amd-megatron-lm-environment-setup-v2412: Environment setup ----------------- In the ``examples/llama`` directory of Megatron-LM, if you're working with Llama 2 7B or Llama 2 70 B, use the ``train_llama2.sh`` configuration script. Likewise, if you're working with Llama 3 or Llama 3.1, then use ``train_llama3.sh`` and update the configuration script accordingly. Network interface ^^^^^^^^^^^^^^^^^ To avoid connectivity issues, ensure the correct network interface is set in your training scripts. 1. Run the following command to find the active network interface on your system. .. code-block:: shell ip a 2. Update the ``NCCL_SOCKET_IFNAME`` and ``GLOO_SOCKET_IFNAME`` variables with your system’s network interface. For example: .. 
code-block:: shell export NCCL_SOCKET_IFNAME=ens50f0np0 export GLOO_SOCKET_IFNAME=ens50f0np0 Dataset options ^^^^^^^^^^^^^^^ You can use either mock data or real data for training. * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: shell DATA_DIR="/root/.cache/data" # Change to where your dataset is stored DATA_PATH=${DATA_DIR}/bookcorpus_text_sentence .. code-block:: shell --data-path $DATA_PATH Ensure that the files are accessible inside the Docker container. * Mock data can be useful for testing and validation. If you're using mock data, replace ``--data-path $DATA_PATH`` with the ``--mock-data`` option. .. code-block:: shell --mock-data Tokenizer ^^^^^^^^^ Tokenization is the process of converting raw text into tokens that can be processed by the model. For Llama models, this typically involves sub-word tokenization, where words are broken down into smaller units based on a fixed vocabulary. The tokenizer is trained along with the model on a large corpus of text, and it learns a fixed vocabulary that can represent a wide range of text from different domains. This allows Llama models to handle a variety of input sequences, including unseen words or domain-specific terms. To train any of the Llama 2 models that this Docker image supports, use the ``Llama2Tokenizer``. To train any of Llama 3 and Llama 3.1 models that this Docker image supports, use the ``HuggingFaceTokenizer``. Set the Hugging Face model link in the ``TOKENIZER_MODEL`` variable. For example, if you're using the Llama 3.1 8B model: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.1-8B Run benchmark tests ------------------- .. note:: If you're running **multi node training**, update the following environment variables. They can also be passed as command line arguments. * Change ``localhost`` to the master node's hostname: .. code-block:: shell MASTER_ADDR="${MASTER_ADDR:-localhost}" * Set the number of nodes you want to train on (for instance, ``2``, ``4``, ``8``): .. code-block:: shell NNODES="${NNODES:-1}" * Set the rank of each node (0 for master, 1 for the first worker node, and so on): .. code-block:: shell NODE_RANK="${NODE_RANK:-0}" * Use this command to run a performance benchmark test of any of the Llama 2 models that this Docker image supports (see :ref:`variables `). .. code-block:: shell {variables} bash examples/llama/train_llama2.sh * Use this command to run a performance benchmark test of any of the Llama 3 and Llama 3.1 models that this Docker image supports (see :ref:`variables `). .. code-block:: shell {variables} bash examples/llama/train_llama3.sh .. _amd-megatron-lm-benchmark-test-vars-v2412: The benchmark tests support the same set of variables: +--------------------------+-----------------------+-----------------------+ | Name | Options | Description | +==========================+=======================+=======================+ | ``TEE_OUTPUT`` | 0 or 1 | 0: disable training | | | | log | | | | | | | | 1: enable training | | | | log | +--------------------------+-----------------------+-----------------------+ | ``MBS`` | | Micro batch size | +--------------------------+-----------------------+-----------------------+ | ``BS`` | | Batch size | +--------------------------+-----------------------+-----------------------+ | ``TP`` | 1, 2, 4, 8 | Tensor parallel | +--------------------------+-----------------------+-----------------------+ | ``TE_FP8`` | 0 or 1 | Datatype. | | | | If it is set to 1, | | | | FP8. 
| | | | | | | | If it is set to 0. | | | | BP16 | +--------------------------+-----------------------+-----------------------+ | ``NO_TORCH_COMPILE`` | 0 or 1 | If it is set to 1, | | | | enable torch.compile. | | | | | | | | If it is set to 0. | | | | Disable torch.compile | | | | (default) | +--------------------------+-----------------------+-----------------------+ | ``SEQ_LENGTH`` | | Input sequence length | +--------------------------+-----------------------+-----------------------+ | ``GEMM_TUNING`` | 0 or 1 | If it is set to 1, | | | | enable gemm tuning. | | | | | | | | If it is set to 0, | | | | disable gemm tuning | +--------------------------+-----------------------+-----------------------+ | ``USE_FLASH_ATTN`` | 0 or 1 | 0: disable flash | | | | attention | | | | | | | | 1: enable flash | | | | attention | +--------------------------+-----------------------+-----------------------+ | ``ENABLE_PROFILING`` | 0 or 1 | 0: disable torch | | | | profiling | | | | | | | | 1: enable torch | | | | profiling | +--------------------------+-----------------------+-----------------------+ | ``MODEL_SIZE`` | | The size of the mode: | | | | 7B/70B, etc. | +--------------------------+-----------------------+-----------------------+ | ``TOTAL_ITERS`` | | Total number of | | | | iterations | +--------------------------+-----------------------+-----------------------+ | ``transformer-impl`` | transformer_engine or | Enable transformer | | | local | engine by default | +--------------------------+-----------------------+-----------------------+ Benchmarking examples ^^^^^^^^^^^^^^^^^^^^^ .. tab-set:: .. tab-item:: Single node training :sync: single Use this command to run training with Llama 2 7B model on a single node. You can specify MBS, BS, FP, datatype, and so on. .. code-block:: bash TEE_OUTPUT=1 MBS=5 BS=120 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script `. See the sample output: .. image:: /data/how-to/rocm-for-ai/llama2-7b-training-log-sample.png :width: 800 .. tab-item:: Multi node training :sync: multi Launch the Docker container on each node. In this example, run training with Llama 2 7B model on 2 nodes with specific MBS, BS, FP, datatype, and so on. On the master node: .. code-block:: bash TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh On the worker node: .. code-block:: bash TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script `. Sample output for 2-node training: Master node: .. image:: /data/how-to/rocm-for-ai/2-node-training-master.png :width: 800 Worker node: .. image:: /data/how-to/rocm-for-ai/2-node-training-worker.png :width: 800 Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM on ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. 
See :doc:`../megatron-lm` for the latest version. For a unified training solution on AMD GPUs with ROCm, the `rocm/megatron-lm `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including Megatron-LM and :doc:`torchtitan <../primus-pytorch>`. Primus with Megatron is designed to replace this ROCm Megatron-LM training workflow. To learn how to migrate workloads from Megatron-LM to Primus with Megatron, see :doc:`megatron-lm-primus-migration-guide`. The `Megatron-LM framework for ROCm `_ is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama, DeepSeek, and Mixtral, enabling developers to train next-generation AI models more efficiently. AMD provides ready-to-use Docker images for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. It contains the following software components to accelerate training workloads: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/megatron-lm-benchmark-models.yaml .. tab-set:: .. tab-item:: {{ data.docker.pull_tag }} :sync: {{ data.docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in data.docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-megatron-lm-model-support-v2510: Supported models ================ The following models are supported for training performance benchmarking with Megatron-LM and ROCm on AMD Instinct MI300X Series GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. {% set model_groups = data.model_groups %} .. raw:: html
<!-- Model / variant selector table generated from megatron-lm-benchmark-models.yaml: a "Model" row listing each {{ model_group.group }} and a "Variant" row listing the {{ model.model }} entries in each group. -->
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). .. _amd-megatron-lm-performance-measurements-v2510: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and latency measurements for training popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the latest version of this training benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-megatron-lm-training-v2510: Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v2510: Download the Docker image ------------------------- .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/megatron-lm-benchmark-models.yaml {% set docker = data.docker %} 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash 4. **Megatron-LM backward compatibility setup** -- this Docker is primarily intended for use with Primus, but it maintains Megatron-LM compatibility with limited support. To roll back to using Megatron-LM, follow these steps: .. code-block:: shell cd /workspace/Megatron-LM/ pip uninstall megatron-core pip install -e . The Docker container hosts a verified commit of ``__. .. _amd-megatron-lm-environment-setup-v2510: Configuration ============= .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b Update the ``train_llama3.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b Update the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. 
Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy Update the ``train_deepseekv3.sh`` configuration script in the ``examples/deepseek_v3`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b Update the ``train_deepseekv2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Update the ``train_mixtral_moe.sh`` configuration script in the ``examples/mixtral`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. note:: See :ref:`Key options ` for more information on configuration options. Multi-node configuration ------------------------ Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. See :ref:`amd-megatron-lm-multi-node-examples` for example run commands. .. _amd-megatron-lm-tokenizer-v2510: Tokenizer --------- You can assign the path of an existing tokenizer to the ``TOKENIZER_MODEL`` as shown in the following examples. If the tokenizer is not found, it'll be downloaded if publicly available. .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b If you do not have Llama 3.3 tokenizer locally, you need to use your personal Hugging Face access token ``HF_TOKEN`` to download the tokenizer. See `Llama-3.3-70B-Instruct `_. After you are authorized, use your ``HF_TOKEN`` to download the tokenizer and set the variable ``TOKENIZER_MODEL`` to the tokenizer path. .. code-block:: shell export HF_TOKEN= The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.3-70B-Instruct" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-8B" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-70B" .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b The training script uses either the ``Llama2Tokenizer`` or ``HuggingFaceTokenizer`` by default. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V3" .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V2-Lite" .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Download the Mixtral tokenizer. .. 
code-block:: shell mkdir tokenizer cd tokenizer export HF_TOKEN= wget --header="Authorization: Bearer $HF_TOKEN" -O ./tokenizer.model https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.model Use the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL=tokenizer/tokenizer.model .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-7B" .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-72B" Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH="/data/bookcorpus_text_sentence" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Download the dataset ^^^^^^^^^^^^^^^^^^^^ .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b pyt_megatron_lm_train_llama-3.1-70b-proxy For Llama models, use the `prepare_dataset.sh `_ script to prepare your dataset. To download the dataset, set the ``DATASET`` variable to the dataset you'd like to use. Three datasets are supported: ``DATASET=wiki``, ``DATASET=fineweb``, and ``DATASET=bookcorpus``. .. code-block:: shell DATASET=wiki TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for wiki-en dataset DATASET=bookcorpus TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for bookcorpus dataset ``TOKENIZER_MODEL`` can be any accessible Hugging Face tokenizer. Remember to either pre-download the tokenizer or setup Hugging Face access otherwise when needed -- see the :ref:`Tokenizer ` section. .. note:: When training set ``DATA_PATH`` to the specific file name prefix pointing to the ``.bin`` or ``.idx`` as in the following example: .. code-block:: shell DATA_PATH="data/bookcorpus_text_sentence" # Change to where your dataset is stored. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. 
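# The helper script below is assumed (based on this example) to tokenize SlimPajama.json
# with the DeepSeek-V3 Hugging Face tokenizer and write Megatron-format .bin/.idx files
# into deepseek-datasets/ -- the same directory that DATA_DIR points to during training.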
bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy If you don't already have the dataset, download the Mixtral dataset using the following commands: .. code-block:: shell mkdir mixtral-datasets cd mixtral-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/mixtral-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b pyt_megatron_lm_train_qwen2.5-72b If you don't already have the dataset, download the Qwen dataset using the following commands: .. code-block:: shell mkdir -p temp/qwen-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/qwen-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. _amd-megatron-lm-run-training-v2510: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X Series GPUs with the AMD Megatron-LM environment. Before starting training, export the following environment variables. .. tab-set:: .. tab-item:: MI355X and MI350X ..
code-block:: shell export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NVTE_CK_USES_BWD_V3=1 .. tab-item:: MI325X and MI300X .. code-block:: shell export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NVTE_CK_USES_BWD_V3=1 # Set this on MI325X/MI300X only export NVTE_CK_IS_V3_ATOMIC_FP32=1 Single node training -------------------- .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b To run the training on a single node for Llama 3.3 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.3-70B-Instruct \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MBS=2 \ BS=16 \ TE_FP8=0 \ TP=1 \ PP=1 \ FSDP=1 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b To run training on a single node for Llama 3.1 8B FP8, navigate to the Megatron-LM folder and use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=512 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=10 \ GEMM_TUNING=0 \ bash examples/llama/train_llama3.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh For Llama 3.1 8B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=512 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=10 \ GEMM_TUNING=1 \ bash examples/llama/train_llama3.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b To run the training on a single node for Llama 3.1 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. To run the training on a single node for Llama 3.1 70B FP8, use the following command. .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ bash examples/llama/train_llama3.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. 
code-block:: shell FP8_WEIGHT_TRANSPOSE_CACHE=0 \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ NUM_LAYERS=40 \ bash examples/llama/train_llama3.sh .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b To run training on a single node for Llama 2 7B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh For Llama 2 7B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. container:: model-doc pyt_megatron_lm_train_llama-2-70b To run the training on a single node for Llama 2 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=7 \ BS=56 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 FORCE_BALANCE=true \ RUN_ENV=cluster \ MODEL_SIZE=671B \ TRAIN_ITERS=50 \ SEQ_LEN=4096 \ NUM_LAYERS=3 \ MICRO_BATCH_SIZE=1 GLOBAL_BATCH_SIZE=32 \ PR=bf16 \ TP=1 PP=1 ETP=1 EP=8 \ GEMM_TUNING=1 \ NVTE_CK_USES_BWD_V3=1 \ USE_GROUPED_GEMM=true MOE_USE_LEGACY_GROUPED_GEMM=true \ GPT_LAYER_IN_TE=true \ bash examples/deepseek_v3/train_deepseekv3.sh .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 GEMM_TUNING=1 \ PR=bf16 \ MBS=4 \ AC=none \ SEQ_LEN=4096 \ PAD_LEN=4096 \ TRAIN_ITERS=20 \ bash examples/deepseek_v2/train_deepseekv2.sh .. note:: Note that DeepSeek-V2-Lite is experiencing instability due to GPU memory access fault for large iterations. For stability, it's recommended to use Primus for this workload. See :doc:`primus-megatron`. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b To run training on a single node for Mixtral 8x7B (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. 
code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=0 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=none \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=4096 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x7B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_mixtral-8x22b-proxy To run training on a single node for Mixtral 8x22B (MoE with expert parallel) with 4-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=4 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=full \ NUM_LAYERS=4 \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=8192 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x22B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b To run training on a single node for Qwen 2.5 7B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=0 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B For FP8, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=1 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ FSDP=1 \ CP=1 \ PP=1 \ MBS=3 \ BS=24 \ TE_FP8=0 \ MODEL_SIZE=72 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-72B \ RECOMPUTE_ACTIVATIONS=full \ CKPT_FORMAT=torch_dist .. _amd-megatron-lm-multi-node-examples-v2510: Multi-node training examples ---------------------------- To run training on multiple nodes, launch the Docker container on each node. For example, for Llama 3 using a two node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=0 \ bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=1 \ bash examples/llama/train_llama3.sh Or, for DeepSeek-V3, an example script ``train_deepseek_v3_slurm.sh`` is provided in ``__ to enable training at scale under a SLURM environment. For example, to run training on 16 nodes, try the following command: .. code-block:: shell sbatch examples/deepseek_v3/train_deepseek_v3_slurm.sh .. _amd-megatron-lm-benchmark-test-vars-v2510: Key options ----------- The benchmark tests support the following sets of variables. ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for BF16 or ``1`` for FP8 -- ``0`` by default. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``FSDP`` ``1`` to enable PyTorch FSDP2. If FSDP is enabled, ``--use-distributed-optimizer``, ``--overlap-param-gather``, and ``--sequence-parallel`` are automatically disabled. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis.
``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. ``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2, for example. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` / ``TP_SIZE`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``TP`` is disabled when ``FSDP`` is turned on. ``EP`` / ``EP_SIZE`` Expert parallel for MoE models. ``SEQ_LENGTH`` Input sequence length. ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``AC`` Activation checkpointing (``none``, ``sel``, or ``full``) -- ``sel`` by default. ``NUM_LAYERS`` Use reduced number of layers as a proxy model. ``RECOMPUTE_NUM_LAYERS`` Number of layers used for checkpointing recompute. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM for ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../megatron-lm` for the latest version. The Megatron-LM framework for ROCm is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X Series GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama 2, Llama 3, Llama 3.1, and DeepSeek, enabling developers to train next-generation AI models more efficiently. See the GitHub repository at ``__. AMD provides a ready-to-use Docker image for MI300X GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. It contains the following software components to accelerate training workloads: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.0 | +--------------------------+--------------------------------+ | PyTorch | 2.7.0a0+git637433 | +--------------------------+--------------------------------+ | Python | 3.10 | +--------------------------+--------------------------------+ | Transformer Engine | 1.11 | +--------------------------+--------------------------------+ | Flash Attention | 3.0.0 | +--------------------------+--------------------------------+ | hipBLASLt | git258a2162 | +--------------------------+--------------------------------+ | Triton | 3.1 | +--------------------------+--------------------------------+ Supported features and models ============================= Megatron-LM provides the following key features to train large language models efficiently: - Transformer Engine (TE) - APEX - GEMM tuning - Torch.compile - 3D parallelism: TP + SP + CP - Distributed optimizer - Flash Attention (FA) 3 - Fused kernels - Pre-training .. _amd-megatron-lm-model-support-25-3: The following models are pre-optimized for performance on the AMD Instinct MI300X GPU. 
* Llama 2 7B * Llama 2 70B * Llama 3 8B * Llama 3 70B * Llama 3.1 8B * Llama 3.1 70B * DeepSeek-V2-Lite .. note:: Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). System validation ================= If you have already validated your system settings, skip this step. Otherwise, complete the :ref:`system validation and optimization steps ` to set up your system before starting training. Disable NUMA auto-balancing --------------------------- Generally, application performance can benefit from disabling NUMA auto-balancing. However, it might be detrimental to performance with certain types of workloads. Run the command ``cat /proc/sys/kernel/numa_balancing`` to check your current NUMA (Non-Uniform Memory Access) settings. Output ``0`` indicates this setting is disabled. If there is no output or the output is ``1``, run the following command to disable NUMA auto-balancing. .. code-block:: shell sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' See :ref:`System validation and optimization ` for more information. .. _mi300x-amd-megatron-lm-training-v253: Environment setup ================= The pre-built ROCm Megatron-LM environment allows users to quickly validate system performance, conduct training benchmarks, and achieve superior performance for models like Llama 3.1, Llama 2, and DeepSeek V2. Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on the MI300X GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v253: Download the Docker image ------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/megatron-lm:v25.3 2. Launch the Docker container. .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME:$HOME -v $HOME/.ssh:/root/.ssh --shm-size 64G --name megatron_training_env rocm/megatron-lm:v25.3 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash The Docker container includes a pre-installed, verified version of Megatron-LM from the `release branch `_. .. _amd-megatron-lm-environment-setup-v253: Configuration scripts --------------------- .. tab-set:: .. tab-item:: Llama :sync: llama If you're working with Llama 2 7B or Llama 2 70 B, use the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__. Likewise, if you're working with Llama 3 or Llama 3.1, then use ``train_llama3.sh`` and update the configuration script accordingly. .. tab-item:: DeepSeek V2 :sync: deepseek Use the ``train_deepseek_v2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ and update the configuration script accordingly. Network interface ^^^^^^^^^^^^^^^^^ .. tab-set:: .. tab-item:: Llama :sync: llama To avoid connectivity issues in multi-node deployments, ensure the correct network interface is set in your training scripts. 1. Run the following command (outside the container) to find the active network interface on your system. .. code-block:: shell ip a 2. Update the ``NCCL_SOCKET_IFNAME`` and ``GLOO_SOCKET_IFNAME`` variables with your system’s network interface. For example: .. 
code-block:: shell export NCCL_SOCKET_IFNAME=ens50f0np0 export GLOO_SOCKET_IFNAME=ens50f0np0 Dataset options ^^^^^^^^^^^^^^^ .. tab-set:: .. tab-item:: Llama :sync: llama You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH=${DATA_PATH:-"/data/bookcorpus_text_sentence"} # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. tab-item:: DeepSeek V2 :sync: deepseek If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/mmap_deepseekv2_datasets_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/mmap_deepseekv2_datasets_text_document.idx You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_DIR="/root/data/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Tokenizer ^^^^^^^^^ Tokenization is the process of converting raw text into tokens that can be processed by the model. For Llama models, this typically involves sub-word tokenization, where words are broken down into smaller units based on a fixed vocabulary. The tokenizer is trained along with the model on a large corpus of text, and it learns a fixed vocabulary that can represent a wide range of text from different domains. This allows Llama models to handle a variety of input sequences, including unseen words or domain-specific terms. .. tab-set:: .. tab-item:: Llama :sync: llama To train any of the Llama 2 models that :ref:`this Docker image supports `, use the ``Llama2Tokenizer``. To train any of Llama 3 and Llama 3.1 models that this Docker image supports, use the ``HuggingFaceTokenizer``. Set the Hugging Face model link in the ``TOKENIZER_MODEL`` variable. For example, if you're using the Llama 3.1 8B model: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.1-8B .. tab-item:: DeepSeek V2 :sync: deepseek To train any of the DeepSeek V2 models that :ref:`this Docker image supports `, use the ``DeepSeekV2Tokenizer``. Multi-node training ^^^^^^^^^^^^^^^^^^^ .. tab-set:: .. tab-item:: Llama :sync: llama If you're running multi-node training, update the following environment variables. 
They can also be passed as command line arguments. * Change ``localhost`` to the master node's hostname: .. code-block:: shell MASTER_ADDR="${MASTER_ADDR:-localhost}" * Set the number of nodes you want to train on (for instance, ``2``, ``4``, ``8``): .. code-block:: shell NNODES="${NNODES:-1}" * Set the rank of each node (0 for master, 1 for the first worker node, and so on): .. code-block:: shell NODE_RANK="${NODE_RANK:-0}" * Set ``DATA_CACHE_PATH`` to a common directory accessible by all the nodes (for example, an NFS directory) for multi-node runs: .. code-block:: shell DATA_CACHE_PATH=/root/cache # Set to a common directory for multi-node runs * For multi-node runs, make sure the correct network drivers are installed on the nodes. If inside a Docker, either install the drivers inside the Docker container or pass the network drivers from the host while creating the Docker container. Start training on AMD Instinct GPUs =========================================== The prebuilt Megatron-LM with ROCm training environment allows users to quickly validate system performance, conduct training benchmarks, and achieve superior performance for models like Llama 3.1 and Llama 2. This container should not be expected to provide generalized performance across all training workloads. You can expect the container to perform in the model configurations described in the following section, but other configurations are not validated by AMD. Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. tab-set:: .. tab-item:: Llama :sync: llama .. tab-set:: .. tab-item:: Single node training :sync: single-node To run training on a single node, navigate to the Megatron-LM folder and use the following command: .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=128 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 bash examples/llama/train_llama3.sh .. tab-item:: Multi-node training :sync: multi-node To run training on multiple nodes, launch the Docker container on each node. For example, for a two node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=256 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 MASTER_ADDR=IP_NODE0 NNODES=2 NODE_RANK=0 bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=256 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 MASTER_ADDR=IP_NODE0 NNODES=2 NODE_RANK=1 bash examples/llama/train_llama3.sh .. tab-item:: DeepSeek V2 :sync: deepseek To run the training on a single node, go to ``/Megatron-LM`` folder and use the following command: .. code-block:: shell cd /workspace/Megatron-LM GEMM_TUNING=1 PR=bf16 MBS=4 AC=none bash examples/deepseek_v2/train_deepseekv2.sh Key options ----------- .. _amd-megatron-lm-benchmark-test-vars-v253: The benchmark tests support the following sets of variables: .. tab-set:: .. tab-item:: Llama :sync: llama ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for BP16 (default) or ``1`` for FP8 GEMMs. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis. ``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. 
``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data provided by you. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``SEQ_LENGTH`` Input sequence length. .. tab-item:: DeepSeek V2 :sync: deepseek ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data provided by you. ``MBS`` Micro batch size. ``GBS`` Global batch size. Benchmarking examples --------------------- .. tab-set:: .. tab-item:: Llama :sync: llama .. tab-set:: .. tab-item:: Single node training :sync: single-node Use this command to run training with Llama 2 7B model on a single node. You can specify MBS, BS, FP, datatype, and so on. .. code-block:: bash TEE_OUTPUT=1 MBS=5 BS=120 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script `. See the sample output: .. image:: /data/how-to/rocm-for-ai/llama2-7b-training-log-sample.png :width: 800 .. tab-item:: Multi-node training :sync: multi-node Launch the Docker container on each node. In this example, run training with Llama 2 7B model on 2 nodes with specific MBS, BS, FP, datatype, and so on. On the master node: .. code-block:: bash TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh On the worker node: .. code-block:: bash TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script `. Sample output for 2-node training: Master node: .. image:: /data/how-to/rocm-for-ai/2-node-training-master.png :width: 800 Worker node: .. image:: /data/how-to/rocm-for-ai/2-node-training-worker.png :width: 800 Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM for ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../megatron-lm` for the latest version. The Megatron-LM framework for ROCm is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X Series GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama 2, Llama 3, Llama 3.1, and DeepSeek, enabling developers to train next-generation AI models more efficiently. See the GitHub repository at ``__. 
AMD provides a ready-to-use Docker image for MI300X Series GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. It contains the following software components to accelerate training workloads: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.0 | +--------------------------+--------------------------------+ | PyTorch | 2.7.0a0+git637433 | +--------------------------+--------------------------------+ | Python | 3.10 | +--------------------------+--------------------------------+ | Transformer Engine | 1.11 | +--------------------------+--------------------------------+ | Flash Attention | 3.0.0 | +--------------------------+--------------------------------+ | hipBLASLt | git258a2162 | +--------------------------+--------------------------------+ | Triton | 3.1 | +--------------------------+--------------------------------+ Supported features and models ============================= Megatron-LM provides the following key features to train large language models efficiently: - Transformer Engine (TE) - APEX - GEMM tuning - Torch.compile - 3D parallelism: TP + SP + CP - Distributed optimizer - Flash Attention (FA) 3 - Fused kernels - Pre-training .. _amd-megatron-lm-model-support-25-4: The following models are pre-optimized for performance on AMD Instinct MI300X Series GPUs. * Llama 3.1 8B * Llama 3.1 70B * Llama 3 8B * Llama 3 70B * Llama 2 7B * Llama 2 70B * DeepSeek-V2-Lite .. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). .. _amd-megatron-lm-performance-measurements-v254: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and latency measurements for training popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the :doc:`latest version of this training benchmarking environment <../megatron-lm>`. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= If you have already validated your system settings, including NUMA auto-balancing, skip this step. Otherwise, complete the :ref:`system validation and optimization steps ` to set up your system before starting training. .. _mi300x-amd-megatron-lm-training-v254: Environment setup ================= The prebuilt ROCm Megatron-LM environment allows users to quickly validate system performance, conduct training benchmarks, and achieve superior performance for models like Llama 3.1, Llama 2, and DeepSeek V2. Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v254: Download the Docker image ------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/megatron-lm:v25.4 2. Launch the Docker container. .. 
code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --device /dev/infiniband --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME:$HOME -v $HOME/.ssh:/root/.ssh --shm-size 64G --name megatron_training_env rocm/megatron-lm:v25.4 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash The Docker container includes a pre-installed, verified version of the ROCm Megatron-LM development branch ``__ (commit `fd6f01 `_). .. _amd-megatron-lm-environment-setup-v254: Configuration scripts --------------------- .. tab-set:: .. tab-item:: Llama :sync: llama If you're working with Llama 2 7B or Llama 2 70 B, use the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__. Likewise, if you're working with Llama 3 or Llama 3.1, use ``train_llama3.sh`` and update the configuration script accordingly. .. tab-item:: DeepSeek V2 :sync: deepseek Use the ``train_deepseek_v2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ and update the configuration script accordingly. Network interface ^^^^^^^^^^^^^^^^^ .. tab-set:: .. tab-item:: Llama :sync: llama Update the network interface in the script to match your system's network interface. To find your network interface, run the following (outside of any Docker container): .. code-block:: bash ip a Look for an active interface that has an IP address in the same subnet as your other nodes. Then, update the following variables in the script, for example: .. code-block:: bash export NCCL_SOCKET_IFNAME=ens50f0np0 export GLOO_SOCKET_IFNAME=ens50f0np0 Dataset options ^^^^^^^^^^^^^^^ .. tab-set:: .. tab-item:: Llama :sync: llama You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH="/data/bookcorpus_text_sentence" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. To download the dataset, set the ``DATASET`` variable to the dataset you'd like to use. Two datasets are supported: ``DATASET=wiki`` and ``DATASET=bookcorpus``. Use the following command to download the dataset. .. code-block:: shell DATASET=wiki bash examples/llama/prepare_dataset.sh # For wiki-en dataset DATASET=bookcorpus bash examples/llama/prepare_dataset.sh # For bookcorpus dataset .. tab-item:: DeepSeek V2 :sync: deepseek If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. 
code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/mmap_deepseekv2_datasets_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/mmap_deepseekv2_datasets_text_document.idx You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_DIR="/root/data/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Tokenizer ^^^^^^^^^ Tokenization is the process of converting raw text into tokens that can be processed by the model. For Llama models, this typically involves sub-word tokenization, where words are broken down into smaller units based on a fixed vocabulary. The tokenizer is trained along with the model on a large corpus of text, and it learns a fixed vocabulary that can represent a wide range of text from different domains. This allows Llama models to handle a variety of input sequences, including unseen words or domain-specific terms. You can assign the path of an existing tokenizer to the ``TOKENIZER_MODEL`` as shown in the following examples. If the tokenizer is not found, it'll be downloaded to the default tokenizer model path: ``${DATA_DIR}/tokenizer_llama3`` or ``${DATA_DIR}/tokenizer_llama2``. .. tab-set:: .. tab-item:: Llama :sync: llama To train any of the Llama 2 models that :ref:`this Docker image supports `, use the ``Llama2Tokenizer`` or the default ``HuggingFaceTokenizer``. To train any of Llama 3 and Llama 3.1 models that this Docker image supports, use the ``HuggingFaceTokenizer``. Set the Hugging Face model path in the ``TOKENIZER_MODEL`` variable. For example, if you're using the Llama 3.1 8B model: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.1-8B .. note:: If you don't already have the Llama 3.1 tokenizer locally, set your personal Hugging Face access token ``HF_TOKEN`` to download the tokenizer. If you encounter the following error, set ``HF_TOKEN`` to your access-authorized Hugging Face token. .. code-block:: shell OSError: You are trying to access a gated repo. # pass your HF_TOKEN export HF_TOKEN=$your_personal_hf_token .. tab-item:: DeepSeek V2 :sync: deepseek To train any of the DeepSeek V2 models that :ref:`this Docker image supports `, use the ``DeepSeekV2Tokenizer``. Multi-node training ^^^^^^^^^^^^^^^^^^^ .. tab-set:: .. tab-item:: Llama :sync: llama If you're running multi-node training, update the following environment variables. They can also be passed as command line arguments. * Change ``localhost`` to the master node's hostname: .. 
code-block:: shell MASTER_ADDR="${MASTER_ADDR:-localhost}" * Set the number of nodes you want to train on (for instance, ``2``, ``4``, ``8``): .. code-block:: shell NNODES="${NNODES:-1}" * Set the rank of each node (0 for master, 1 for the first worker node, and so on): .. code-block:: shell NODE_RANK="${NODE_RANK:-0}" * Set ``DATA_CACHE_PATH`` to a common directory accessible by all the nodes (for example, an NFS directory) for multi-node runs: .. code-block:: shell DATA_CACHE_PATH=/root/cache # Set to a common directory for multi-node runs * For multi-node runs, make sure the correct network drivers are installed on the nodes. If inside a Docker container, either install the drivers inside the Docker container or pass the network drivers from the host while creating the Docker container. .. code-block:: shell # Specify which RDMA interfaces to use for communication export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 Start training on AMD Instinct GPUs =========================================== The prebuilt Megatron-LM with ROCm training environment allows users to quickly validate system performance, conduct training benchmarks, and achieve superior performance for models like Llama 3.1 and Llama 2. This container should not be expected to provide generalized performance across all training workloads. You can expect the container to perform in the model configurations described in the following section, but other configurations are not validated by AMD. Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. tab-set:: .. tab-item:: Llama :sync: llama .. tab-set:: .. tab-item:: Single node training :sync: single-node To run training on a single node, navigate to the Megatron-LM folder and use one of the following commands. - For Llama 3.1 8B FP8: .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=128 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 TOTAL_ITERS=50 bash examples/llama/train_llama3.sh - For Llama 3.1 8B BF16: .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=128 TP=1 TE_FP8=0 SEQ_LENGTH=8192 MODEL_SIZE=8 TOTAL_ITERS=50 bash examples/llama/train_llama3.sh - For Llama 2 7B FP8: .. code-block:: shell TEE_OUTPUT=1 MBS=4 BS=256 TP=1 TE_FP8=1 SEQ_LENGTH=4096 MODEL_SIZE=7 TOTAL_ITERS=50 bash examples/llama/train_llama2.sh - For Llama 2 7B BF16: .. code-block:: shell TEE_OUTPUT=1 MBS=4 BS=256 TP=1 TE_FP8=0 SEQ_LENGTH=4096 MODEL_SIZE=7 TOTAL_ITERS=50 bash examples/llama/train_llama2.sh To run training with FSDP2 enabled, add the ``FSDP=1`` argument. For example: - For Llama 3 70B BF16: .. code-block:: shell TEE_OUTPUT=1 MBS=3 BS=24 TP=1 TE_FP8=0 FSDP=1 RECOMPUTE=1 SEQ_LENGTH=8192 MODEL_SIZE=70 TOTAL_ITERS=50 bash examples/llama/train_llama3.sh - For Llama 2 70B BF16: .. code-block:: shell TEE_OUTPUT=1 MBS=3 BS=56 TP=1 TE_FP8=0 FSDP=1 RECOMPUTE=1 SEQ_LENGTH=4096 MODEL_SIZE=70 TOTAL_ITERS=50 bash examples/llama/train_llama2.sh .. note:: It's suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, and ``FP16`` precision. .. tab-item:: Multi-node training :sync: multi-node To run training on multiple nodes, launch the Docker container on each node. For example, for a two node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: .. 
code-block:: shell TEE_OUTPUT=1 MBS=2 BS=256 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 MASTER_ADDR=IP_NODE0 NNODES=2 NODE_RANK=0 bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=256 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 MASTER_ADDR=IP_NODE0 NNODES=2 NODE_RANK=1 bash examples/llama/train_llama3.sh .. tab-item:: DeepSeek V2 :sync: deepseek To run training on a single node, go to the ``/workspace/Megatron-LM`` folder and use the following command: .. code-block:: shell cd /workspace/Megatron-LM GEMM_TUNING=1 PR=bf16 MBS=4 AC=none SEQ_LEN=4096 PAD_LEN=4096 TRAIN_ITERS=50 bash examples/deepseek_v2/train_deepseekv2.sh Key options ----------- .. _amd-megatron-lm-benchmark-test-vars-v254: The benchmark tests support the following sets of variables: .. tab-set:: .. tab-item:: Llama :sync: llama ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for BF16 or ``1`` for FP8 -- ``0`` by default. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``FSDP`` ``1`` to enable PyTorch FSDP2. If FSDP is enabled, ``--use-distributed-optimizer``, ``--overlap-param-gather``, and ``--sequence-parallel`` are automatically disabled. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis. ``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. ``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``TP`` is disabled when ``FSDP`` is turned on. ``SEQ_LENGTH`` Input sequence length. .. tab-item:: DeepSeek V2 :sync: deepseek ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``TRAIN_ITERS`` The total number of iterations. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``GBS`` Global batch size. ``SEQ_LEN`` Input sequence length. ``AC`` Activation checkpointing (``none``, ``sel``, or ``full``) -- ``sel`` by default. Benchmarking examples --------------------- .. tab-set:: .. tab-item:: Llama :sync: llama .. tab-set:: .. tab-item:: Single node training :sync: single-node Use this command to run training with the Llama 2 7B model on a single node. You can specify MBS, BS, FP, datatype, and so on. .. code-block:: bash TEE_OUTPUT=1 MBS=5 BS=120 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script `. See the sample output: .. image:: /data/how-to/rocm-for-ai/llama2-7b-training-log-sample.png :width: 800 .. tab-item:: Multi-node training :sync: multi-node Launch the Docker container on each node. In this example, run training with the Llama 2 7B model on two nodes with specific MBS, BS, FP, datatype, and so on. On the master node: .. code-block:: bash TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh On the worker node: ..
code-block:: bash TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script `. Sample output for 2-node training: Master node: .. image:: /data/how-to/rocm-for-ai/2-node-training-master.png :width: 800 Worker node: .. image:: /data/how-to/rocm-for-ai/2-node-training-worker.png :width: 800 Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM for ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../megatron-lm` for the latest version. The `Megatron-LM framework for ROCm `_ is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X Series GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama, DeepSeek, and Mixtral, enabling developers to train next-generation AI models more efficiently. AMD provides a ready-to-use Docker image for MI300X Series GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. It contains the following software components to accelerate training workloads: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.4 | +--------------------------+--------------------------------+ | PyTorch | 2.8.0a0+gite2f9759 | +--------------------------+--------------------------------+ | Python | 3.12 or 3.10 | +--------------------------+--------------------------------+ | Transformer Engine | 1.13.0+bb061ade | +--------------------------+--------------------------------+ | Flash Attention | 3.0.0 | +--------------------------+--------------------------------+ | hipBLASLt | 0.13.0-4f18bf6 | +--------------------------+--------------------------------+ | Triton | 3.3.0 | +--------------------------+--------------------------------+ | RCCL | 2.22.3 | +--------------------------+--------------------------------+ Megatron-LM provides the following key features to train large language models efficiently: - Transformer Engine (TE) - APEX - GEMM tuning - Torch.compile - 3D parallelism: TP + SP + CP - Distributed optimizer - Flash Attention (FA) 3 - Fused kernels - Pre-training .. _amd-megatron-lm-model-support-v255: The following models are pre-optimized for performance on AMD Instinct MI300X Series GPUs. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.5-benchmark-models.yaml Supported models ================ The following models are supported for training performance benchmarking with Megatron-LM and ROCm. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. {% set model_groups = data["megatron-lm_benchmark"].model_groups %} .. raw:: html
[Model selector omitted: a raw HTML table generated from ``model_groups``, with a "Model" row listing the model groups and a "Model variant" row listing the models in each group.]
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). .. _amd-megatron-lm-performance-measurements-v255: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and latency measurements for training popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the latest version of this training benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-megatron-lm-training-v255: Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v255: Download the Docker image ------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. tab-set:: .. tab-item:: Ubuntu 24.04 + Python 3.12 :sync: py312 .. code-block:: shell docker pull rocm/megatron-lm:v25.5_py312 .. tab-item:: Ubuntu 22.04 + Python 3.10 :sync: py310 .. code-block:: shell docker pull rocm/megatron-lm:v25.5_py310 2. Launch the Docker container. .. tab-set:: .. tab-item:: Ubuntu 24.04 + Python 3.12 :sync: py312 .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --device /dev/infiniband --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME:$HOME -v $HOME/.ssh:/root/.ssh --shm-size 128G --name megatron_training_env rocm/megatron-lm:v25.5_py312 .. tab-item:: Ubuntu 22.04 + Python 3.10 :sync: py310 .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --device /dev/infiniband --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME:$HOME -v $HOME/.ssh:/root/.ssh --shm-size 128G --name megatron_training_env rocm/megatron-lm:v25.5_py310 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash The Docker container includes a pre-installed, verified version of the ROCm Megatron-LM development branch ``__, including necessary training scripts. .. _amd-megatron-lm-environment-setup-v255: Configuration ============= .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b Update the ``train_llama3.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. 
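For example, here is a minimal sketch of the kinds of variables this guide adjusts in the configuration script. The values shown are illustrative, not recommended defaults -- see the Key options section for what each variable controls.

.. code-block:: bash

   # Illustrative values only -- adjust for your model and hardware
   MOCK_DATA=1                                # 1 = synthetic data, 0 = real data (also set DATA_PATH)
   TOKENIZER_MODEL=meta-llama/Llama-3.1-8B    # Hugging Face tokenizer path
   SEQ_LENGTH=8192                            # input sequence length
   MBS=2                                      # micro batch size
   BS=128                                     # global batch size
   TE_FP8=0                                   # 0 = BF16, 1 = FP8
   TOTAL_ITERS=50                             # total number of training iterations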
Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b Update the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy Update the ``train_deepseekv3.sh`` configuration script in the ``examples/deepseek_v3`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b Update the ``train_deepseekv2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Update the ``train_mixtral_moe.sh`` configuration script in the ``examples/mixtral`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. note:: See :ref:`Key options ` for more information on configuration options. Network interface ----------------- Update the network interface in the script to match your system's network interface. To find your network interface, run the following (outside of any Docker container): .. code-block:: bash ip a Look for an active interface that has an IP address in the same subnet as your other nodes. Then, update the following variables in the script, for example: .. code-block:: bash export NCCL_SOCKET_IFNAME=ens50f0np0 export GLOO_SOCKET_IFNAME=ens50f0np0 .. _amd-megatron-lm-tokenizer-v255: Tokenizer --------- You can assign the path of an existing tokenizer to the ``TOKENIZER_MODEL`` as shown in the following examples. If the tokenizer is not found, it'll be downloaded if publicly available. .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b If you do not have Llama 3.3 tokenizer locally, you need to use your personal Hugging Face access token ``HF_TOKEN`` to download the tokenizer. See `Llama-3.3-70B-Instruct `_. After you are authorized, use your ``HF_TOKEN`` to download the tokenizer and set the variable ``TOKENIZER_MODEL`` to the tokenizer path. .. code-block:: shell export HF_TOKEN= The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.3-70B-Instruct" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-8B" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-70B" .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b The training script uses either the ``Llama2Tokenizer`` or ``HuggingFaceTokenizer`` by default. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy The training script uses the ``HuggingFaceTokenizer``. 
Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V3" .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V2-Lite" .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Download the Mixtral tokenizer. .. code-block:: shell mkdir tokenizer cd tokenizer export HF_TOKEN= wget --header="Authorization: Bearer $HF_TOKEN" -O ./tokenizer.model https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.model Use the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL=tokenizer/tokenizer.model Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH="/data/bookcorpus_text_sentence" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Download the dataset ^^^^^^^^^^^^^^^^^^^^ .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b For Llama models, use the `prepare_dataset.sh `_ script to prepare your dataset. To download the dataset, set the ``DATASET`` variable to the dataset you'd like to use. Three datasets are supported: ``DATASET=wiki``, ``DATASET=fineweb``, and ``DATASET=bookcorpus``. .. code-block:: shell DATASET=wiki TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for wiki-en dataset DATASET=bookcorpus TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for bookcorpus dataset ``TOKENIZER_MODEL`` can be any accessible Hugging Face tokenizer. Remember to either pre-download the tokenizer or setup Hugging Face access otherwise when needed -- see the :ref:`Tokenizer ` section. .. note:: When training set ``DATA_PATH`` to the specific file name prefix pointing to the ``.bin`` or ``.idx`` as in the following example: .. code-block:: shell DATA_PATH="data/bookcorpus_text_sentence" # Change to where your dataset is stored. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. 
code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/mmap_deepseekv2_datasets_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/mmap_deepseekv2_datasets_text_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/mmap_deepseekv2_datasets_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/mmap_deepseekv2_datasets_text_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy If you don't already have the dataset, download the Mixtral dataset using the following commands: .. code-block:: shell mkdir mixtral-datasets cd mixtral-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/mixtral-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Multi-node configuration ------------------------ If you're running multi-node training, update the following environment variables. They can also be passed as command line arguments. Refer to the following example configurations. * Change ``localhost`` to the master node's hostname: .. 
code-block:: shell MASTER_ADDR="${MASTER_ADDR:-localhost}" * Set the number of nodes you want to train on (for instance, ``2``, ``4``, ``8``): .. code-block:: shell NNODES="${NNODES:-1}" * Set the rank of each node (0 for master, 1 for the first worker node, and so on): .. code-block:: shell NODE_RANK="${NODE_RANK:-0}" * Set ``DATA_CACHE_PATH`` to a common directory accessible by all the nodes (for example, an NFS directory) for multi-node runs: .. code-block:: shell DATA_CACHE_PATH=/root/cache # Set to a common directory for multi-node runs * For multi-node runs, make sure the correct network drivers are installed on the nodes. If inside a Docker container, either install the drivers inside the Docker container or pass the network drivers from the host while creating the Docker container. .. code-block:: shell # Specify which RDMA interfaces to use for communication export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 Getting started =============== The prebuilt Megatron-LM with ROCm training environment allows users to quickly validate system performance, conduct training benchmarks, and achieve superior performance for models like Llama, DeepSeek, and Mixtral. This container should not be expected to provide generalized performance across all training workloads. You can expect the container to perform in the model configurations described in the following section, but other configurations are not validated by AMD. .. _amd-megatron-lm-run-training-v255: Run training ------------ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X Series GPUs with the AMD Megatron-LM environment. Single node training ^^^^^^^^^^^^^^^^^^^^ .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b To run the training on a single node for Llama 3.3 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TEE_OUTPUT=1 RECOMPUTE=1 SEQ_LENGTH=8192 MBS=2 BS=16 TE_FP8=0 TP=1 PP=1 FSDP=1 MODEL_SIZE=70 TOTAL_ITERS=50 bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. Currently, FSDP is only compatible with BF16 precision. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b To run training on a single node for Llama 3.1 8B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=128 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 TOTAL_ITERS=50 bash examples/llama/train_llama3.sh For Llama 3.1 8B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=128 TP=1 TE_FP8=0 SEQ_LENGTH=8192 MODEL_SIZE=8 TOTAL_ITERS=50 bash examples/llama/train_llama3.sh .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b To run the training on a single node for Llama 3.1 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TEE_OUTPUT=1 MBS=3 BS=24 TP=1 TE_FP8=0 FSDP=1 RECOMPUTE=1 SEQ_LENGTH=8192 MODEL_SIZE=70 TOTAL_ITERS=50 bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. 
Currently, FSDP is only compatible with BF16 precision. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b To run training on a single node for Llama 2 7B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 MBS=4 BS=256 TP=1 TE_FP8=1 SEQ_LENGTH=4096 MODEL_SIZE=7 TOTAL_ITERS=50 bash examples/llama/train_llama2.sh For Llama 2 7B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 MBS=4 BS=256 TP=1 TE_FP8=0 SEQ_LENGTH=4096 MODEL_SIZE=7 TOTAL_ITERS=50 bash examples/llama/train_llama2.sh .. container:: model-doc pyt_megatron_lm_train_llama-2-70b To run the training on a single node for Llama 2 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TEE_OUTPUT=1 MBS=7 BS=56 TP=1 TE_FP8=0 FSDP=1 RECOMPUTE=1 SEQ_LENGTH=4096 MODEL_SIZE=70 TOTAL_ITERS=50 bash examples/llama/train_llama2.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. Currently, FSDP is only compatible with BF16 precision. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell FORCE_BANLANCE=true \ RUN_ENV=cluster \ MODEL_SIZE=671B \ TRAIN_ITERS=50 \ SEQ_LEN=4096 \ NUM_LAYERS=3 \ MICRO_BATCH_SIZE=1 GLOBAL_BATCH_SIZE=32 \ PR=bf16 \ TP=1 PP=1 ETP=1 EP=8 \ GEMM_TUNING=1 \ NVTE_CK_USES_BWD_V3=1 \ USE_GROUPED_GEMM=true MOE_USE_LEGACY_GROUPED_GEMM=true \ GPT_LAYER_IN_TE=true \ bash examples/deepseek_v3/train_deepseekv3.sh .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell GEMM_TUNING=1 PR=bf16 MBS=4 AC=none SEQ_LEN=4096 PAD_LEN=4096 TRAIN_ITERS=50 bash examples/deepseek_v2/train_deepseekv2.sh .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b To run training on a single node for Mixtral 8x7B (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell RECOMPUTE_NUM_LAYERS=0 TEE_OUTPUT=1 MBS=1 GBS=16 TP_SIZE=1 PP_SIZE=1 AC=none PR=bf16 EP_SIZE=8 ETP_SIZE=1 SEQLEN=4096 FORCE_BALANCE=true MOCK_DATA=1 RUN_ENV=cluster MODEL_SIZE=8x7B TRAIN_ITERS=50 bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_mixtral-8x22b-proxy To run training on a single node for Mixtral 8x22B (MoE with expert parallel) with 4-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell RECOMPUTE_NUM_LAYERS=4 TEE_OUTPUT=1 MBS=1 GBS=16 TP_SIZE=1 PP_SIZE=1 AC=full NUM_LAYERS=4 PR=bf16 EP_SIZE=8 ETP_SIZE=1 SEQLEN=8192 FORCE_BALANCE=true MOCK_DATA=1 RUN_ENV=cluster MODEL_SIZE=8x22B TRAIN_ITERS=50 bash examples/mixtral/train_mixtral_moe.sh Multi-node training ^^^^^^^^^^^^^^^^^^^ To run training on multiple nodes, launch the Docker container on each node. For example, for Llama 3 using a two node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: ..
code-block:: shell TEE_OUTPUT=1 MBS=2 BS=256 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 MASTER_ADDR=IP_NODE0 NNODES=2 NODE_RANK=0 bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 MBS=2 BS=256 TP=1 TE_FP8=1 SEQ_LENGTH=8192 MODEL_SIZE=8 MASTER_ADDR=IP_NODE0 NNODES=2 NODE_RANK=1 bash examples/llama/train_llama3.sh Or, for DeepSeek-V3, an example script ``train_deepseek_v3_slurm.sh`` is provided in ``__ to enable training at scale under a SLURM environment. For example, to run training on 16 nodes, try the following command: .. code-block:: shell sbatch examples/deepseek_v3/train_deepseek_v3_slurm.sh .. _amd-megatron-lm-benchmark-test-vars-v255: Key options ----------- The benchmark tests support the following sets of variables. ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for B16 or ``1`` for FP8 -- ``0`` by default. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``FSDP`` ``1`` to enable PyTorch FSDP2. If FSDP is enabled, ``--use-distributed-optimizer``, ``--overlap-param-gather``, and ``--sequence-parallel`` are automatically disabled. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis. ``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. ``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2, for example. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` / ``TP_SIZE`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``TP`` is disabled when ``FSDP`` is turned on. ``EP`` / ``EP_SIZE`` Expert parallel for MoE models. ``SEQ_LENGTH`` Input sequence length. ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``AC`` Activation checkpointing (``none``, ``sel``, or ``full``) -- ``sel`` by default. ``NUM_LAYERS`` Use reduced number of layers as a proxy model. ``RECOMPUTE_NUM_LAYERS`` Number of layers used for checkpointing recompute. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM for ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../megatron-lm` for the latest version. The `Megatron-LM framework for ROCm `__ is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X Series GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama, DeepSeek, and Mixtral, enabling developers to train next-generation AI models more efficiently. AMD provides ready-to-use Docker images for MI300X Series GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. 
It contains the following software components to accelerate training workloads: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.6-benchmark-models.yaml {% set dockers = data.dockers %} {% if dockers|length > 1 %} .. tab-set:: {% for docker in data.dockers %} .. tab-item:: ``{{ docker.pull_tag }}`` :sync: {{ docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endfor %} {% elif dockers|length == 1 %} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endif %} .. _amd-megatron-lm-model-support-v256: The following models are pre-optimized for performance on AMD Instinct MI300X Series GPUs. Supported models ================ The following models are supported for training performance benchmarking with Megatron-LM and ROCm. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. {% set model_groups = data.model_groups %} .. raw:: html
[Model selector omitted: a raw HTML table generated from ``model_groups``, with a "Model" row listing the model groups and a "Model variant" row listing the models in each group.]
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). .. _amd-megatron-lm-performance-measurements-v256: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and latency measurements for training popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the latest version of this training benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-megatron-lm-training-v256: Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v256: Download the Docker image ------------------------- .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.6-benchmark-models.yaml {% set dockers = data.dockers %} 1. Use the following command to pull the Docker image from Docker Hub. {% if dockers|length > 1 %} .. tab-set:: {% for docker in data.dockers %} .. tab-item:: {{ docker.doc_name }} :sync: {{ docker.pull_tag }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} {% elif dockers|length == 1 %} {% set docker = dockers[0] %} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endif %} 2. Launch the Docker container. {% if dockers|length > 1 %} .. tab-set:: {% for docker in data.dockers %} .. tab-item:: {{ docker.doc_name }} :sync: {{ docker.pull_tag }} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} {% endfor %} {% elif dockers|length == 1 %} {% set docker = dockers[0] %} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} {% endif %} 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash The Docker container includes a pre-installed, verified version of the ROCm Megatron-LM development branch ``__, including necessary training scripts. .. 
_amd-megatron-lm-environment-setup-v256: Configuration ============= .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b Update the ``train_llama3.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b Update the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy Update the ``train_deepseekv3.sh`` configuration script in the ``examples/deepseek_v3`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b Update the ``train_deepseekv2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Update the ``train_mixtral_moe.sh`` configuration script in the ``examples/mixtral`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. note:: See :ref:`Key options ` for more information on configuration options. Network interface ----------------- Update the network interface in the script to match your system's network interface. To find your network interface, run the following (outside of any Docker container): .. code-block:: bash ip a Look for an active interface that has an IP address in the same subnet as your other nodes. Then, update the following variables in the script, for example: .. code-block:: bash export NCCL_SOCKET_IFNAME=ens50f0np0 export GLOO_SOCKET_IFNAME=ens50f0np0 .. _amd-megatron-lm-tokenizer-v256: Tokenizer --------- You can assign the path of an existing tokenizer to the ``TOKENIZER_MODEL`` as shown in the following examples. If the tokenizer is not found, it'll be downloaded if publicly available. .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b If you do not have Llama 3.3 tokenizer locally, you need to use your personal Hugging Face access token ``HF_TOKEN`` to download the tokenizer. See `Llama-3.3-70B-Instruct `_. After you are authorized, use your ``HF_TOKEN`` to download the tokenizer and set the variable ``TOKENIZER_MODEL`` to the tokenizer path. .. code-block:: shell export HF_TOKEN= The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.3-70B-Instruct" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-8B" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. 
code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-70B" .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b The training script uses either the ``Llama2Tokenizer`` or ``HuggingFaceTokenizer`` by default. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V3" .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V2-Lite" .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Download the Mixtral tokenizer. .. code-block:: shell mkdir tokenizer cd tokenizer export HF_TOKEN= wget --header="Authorization: Bearer $HF_TOKEN" -O ./tokenizer.model https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.model Use the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL=tokenizer/tokenizer.model .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-7B" .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-72B" Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH="/data/bookcorpus_text_sentence" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Download the dataset ^^^^^^^^^^^^^^^^^^^^ .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b pyt_megatron_lm_train_llama-3.1-70b-proxy For Llama models, use the `prepare_dataset.sh `_ script to prepare your dataset. To download the dataset, set the ``DATASET`` variable to the dataset you'd like to use. Three datasets are supported: ``DATASET=wiki``, ``DATASET=fineweb``, and ``DATASET=bookcorpus``. .. code-block:: shell DATASET=wiki TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for wiki-en dataset DATASET=bookcorpus TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for bookcorpus dataset ``TOKENIZER_MODEL`` can be any accessible Hugging Face tokenizer. Remember to either pre-download the tokenizer or setup Hugging Face access otherwise when needed -- see the :ref:`Tokenizer ` section. .. note:: When training set ``DATA_PATH`` to the specific file name prefix pointing to the ``.bin`` or ``.idx`` as in the following example: .. 
code-block:: shell DATA_PATH="data/bookcorpus_text_sentence" # Change to where your dataset is stored. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy If you don't already have the dataset, download the Mixtral dataset using the following commands: .. code-block:: shell mkdir mixtral-datasets cd mixtral-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/mixtral-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b pyt_megatron_lm_train_qwen2.5-72b If you don't already have the dataset, download the Mixtral dataset using the following commands: .. 
code-block:: shell mkdir -p temp/qwen-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/qwen-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Multi-node configuration ------------------------ If you're running multi-node training, update the following environment variables. They can also be passed as command line arguments. Refer to the following example configurations. * Change ``localhost`` to the master node's hostname: .. code-block:: shell MASTER_ADDR="${MASTER_ADDR:-localhost}" * Set the number of nodes you want to train on (for instance, ``2``, ``4``, ``8``): .. code-block:: shell NNODES="${NNODES:-1}" * Set the rank of each node (0 for master, 1 for the first worker node, and so on): .. code-block:: shell NODE_RANK="${NODE_RANK:-0}" * Set ``DATA_CACHE_PATH`` to a common directory accessible by all the nodes (for example, an NFS directory) for multi-node runs: .. code-block:: shell DATA_CACHE_PATH=/root/cache # Set to a common directory for multi-node runs * For multi-node runs, make sure the correct network drivers are installed on the nodes. If inside a Docker container, either install the drivers inside the Docker container or pass the network drivers from the host while creating the Docker container. .. code-block:: shell # Specify which RDMA interfaces to use for communication export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 .. _amd-megatron-lm-run-training-v256: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X Series GPUs with the AMD Megatron-LM environment. Single node training -------------------- .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b To run the training on a single node for Llama 3.3 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.3-70B-Instruct \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MBS=2 \ BS=16 \ TE_FP8=0 \ TP=1 \ PP=1 \ FSDP=1 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b To run training on a single node for Llama 3.1 8B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh For Llama 3.1 8B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. 
container:: model-doc pyt_megatron_lm_train_llama-3.1-70b To run the training on a single node for Llama 3.1 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b-proxy To run the training on a single node for Llama 3.1 70B with proxy, use the following command. .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ NUM_LAYERS=40 \ bash examples/llama/train_llama3.sh .. note:: Use two or more nodes to run the *full* Llama 70B model with FP8 precision. .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b To run training on a single node for Llama 2 7B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh For Llama 2 7B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. container:: model-doc pyt_megatron_lm_train_llama-2-70b To run the training on a single node for Llama 2 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=7 \ BS=56 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 FORCE_BALANCE=true \ RUN_ENV=cluster \ MODEL_SIZE=671B \ TRAIN_ITERS=50 \ SEQ_LEN=4096 \ NUM_LAYERS=3 \ MICRO_BATCH_SIZE=1 GLOBAL_BATCH_SIZE=32 \ PR=bf16 \ TP=1 PP=1 ETP=1 EP=8 \ GEMM_TUNING=1 \ NVTE_CK_USES_BWD_V3=1 \ USE_GROUPED_GEMM=true MOE_USE_LEGACY_GROUPED_GEMM=true \ GPT_LAYER_IN_TE=true \ bash examples/deepseek_v3/train_deepseekv3.sh .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 GEMM_TUNING=1 \ PR=bf16 \ MBS=4 \ AC=none \ SEQ_LEN=4096 \ PAD_LEN=4096 \ TRAIN_ITERS=50 \ bash examples/deepseek_v2/train_deepseekv2.sh .. 
container:: model-doc pyt_megatron_lm_train_mixtral-8x7b To run training on a single node for Mixtral 8x7B (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=0 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=none \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=4096 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x7B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_mixtral-8x22b-proxy To run training on a single node for Mixtral 8x22B (MoE with expert parallel) with 4-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=4 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=full \ NUM_LAYERS=4 \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=8192 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x22B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b To run training on a single node for Qwen 2.5 7B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=0 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B For FP8, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=1 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ FSDP=1 \ CP=1 \ PP=1 \ MBS=3 \ BS=24 \ TE_FP8=0 \ MODEL_SIZE=72 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-72B \ RECOMPUTE_ACTIVATIONS=full \ CKPT_FORMAT=torch_dist Multi-node training examples ---------------------------- To run training on multiple nodes, launch the Docker container on each node. For example, for Llama 3 using a two-node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=0 \ bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=1 \ bash examples/llama/train_llama3.sh Or, for DeepSeek-V3, an example script ``train_deepseek_v3_slurm.sh`` is provided in ``__ to enable training at scale under a SLURM environment. For example, to run training on 16 nodes, try the following command: .. code-block:: shell sbatch examples/deepseek_v3/train_deepseek_v3_slurm.sh .. _amd-megatron-lm-benchmark-test-vars-v256: Key options ----------- The benchmark tests support the following sets of variables. ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for BF16 or ``1`` for FP8 -- ``0`` by default. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``FSDP`` ``1`` to enable PyTorch FSDP2.
If FSDP is enabled, ``--use-distributed-optimizer``, ``--overlap-param-gather``, and ``--sequence-parallel`` are automatically disabled. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis. ``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. ``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2, for example. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` / ``TP_SIZE`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``TP`` is disabled when ``FSDP`` is turned on. ``EP`` / ``EP_SIZE`` Expert parallel for MoE models. ``SEQ_LENGTH`` Input sequence length. ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``AC`` Activation checkpointing (``none``, ``sel``, or ``full``) -- ``sel`` by default. ``NUM_LAYERS`` Use reduced number of layers as a proxy model. ``RECOMPUTE_NUM_LAYERS`` Number of layers used for checkpointing recompute. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM for ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../megatron-lm` for the latest version. The ROCm Megatron-LM framework now has limited support with this Docker environment; it now focuses on Primus with Megatron-Core. See :doc:`../primus-megatron`. To learn how to migrate your existing workloads to Primus with Megatron-Core, see :doc:`megatron-lm-primus-migration-guide`. The `Megatron-LM framework for ROCm `_ is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X Series GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama, DeepSeek, and Mixtral, enabling developers to train next-generation AI models more efficiently. AMD provides ready-to-use Docker images for MI300X Series GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. It contains the following software components to accelerate training workloads: .. note:: This Docker environment is based on Python 3.10 and Ubuntu 22.04. For an alternative environment with Python 3.12 and Ubuntu 24.04, see the :doc:`previous ROCm Megatron-LM v25.6 Docker release `. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.7-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for docker in dockers %} .. tab-item:: ``{{ docker.pull_tag }}`` :sync: {{ docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endfor %} .. 
_amd-megatron-lm-model-support-v257: Supported models ================ The following models are supported for training performance benchmarking with Megatron-LM and ROCm on AMD Instinct MI300X Series GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. {% set model_groups = data.model_groups %} .. raw:: html
   <!-- Model and variant selector rendered from the model_groups data: a "Model" row with one entry per model_group.group and a "Variant" row with one entry per model.model. -->
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). .. _amd-megatron-lm-performance-measurements-v257: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and latency measurements for training popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the latest version of this training benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-megatron-lm-training-v257: Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v257: Download the Docker image ------------------------- .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.7-benchmark-models.yaml {% set dockers = data.dockers %} 1. Use the following command to pull the Docker image from Docker Hub. {% if dockers|length > 1 %} .. tab-set:: {% for docker in data.dockers %} .. tab-item:: {{ docker.doc_name }} :sync: {{ docker.pull_tag }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} {% elif dockers|length == 1 %} {% set docker = dockers[0] %} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endif %} 2. Launch the Docker container. {% if dockers|length > 1 %} .. tab-set:: {% for docker in dockers %} .. tab-item:: {{ docker.doc_name }} :sync: {{ docker.pull_tag }} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} {% endfor %} {% elif dockers|length == 1 %} {% set docker = dockers[0] %} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} {% endif %} 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash 4. **Megatron-LM backward compatibility setup** -- this Docker is primarily intended for use with Primus, but it maintains Megatron-LM compatibility with limited support. 
To roll back to using Megatron-LM, follow these steps: .. code-block:: shell cd /workspace/Megatron-LM/ pip uninstall megatron-core pip install -e . The Docker container hosts ``__ at verified commit ``e8e9edc``. .. _amd-megatron-lm-environment-setup-v257: Configuration ============= .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b Update the ``train_llama3.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b Update the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy Update the ``train_deepseekv3.sh`` configuration script in the ``examples/deepseek_v3`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b Update the ``train_deepseekv2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Update the ``train_mixtral_moe.sh`` configuration script in the ``examples/mixtral`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. note:: See :ref:`Key options ` for more information on configuration options. Network interface ----------------- Update the network interface in the script to match your system's network interface. To find your network interface, run the following (outside of any Docker container): .. code-block:: bash ip a Look for an active interface that has an IP address in the same subnet as your other nodes. Then, update the following variables in the script, for example: .. code-block:: bash export NCCL_SOCKET_IFNAME=ens50f0np0 export GLOO_SOCKET_IFNAME=ens50f0np0 .. _amd-megatron-lm-tokenizer-v257: Tokenizer --------- You can assign the path of an existing tokenizer to the ``TOKENIZER_MODEL`` as shown in the following examples. If the tokenizer is not found, it'll be downloaded if publicly available. .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b If you do not have Llama 3.3 tokenizer locally, you need to use your personal Hugging Face access token ``HF_TOKEN`` to download the tokenizer. See `Llama-3.3-70B-Instruct `_. After you are authorized, use your ``HF_TOKEN`` to download the tokenizer and set the variable ``TOKENIZER_MODEL`` to the tokenizer path. .. code-block:: shell export HF_TOKEN= The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.3-70B-Instruct" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-8B" .. 
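To confirm that the tokenizer can actually be fetched with your credentials before launching a long run, a quick sanity check -- assuming the ``transformers`` and ``huggingface_hub`` packages in the container read ``HF_TOKEN`` from the environment -- is:

.. code-block:: shell

   # Pre-fetch the tokenizer into the local Hugging Face cache; fails fast if access is missing
   python -c "from transformers import AutoTokenizer; AutoTokenizer.from_pretrained('meta-llama/Llama-3.1-8B')"

..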
container:: model-doc pyt_megatron_lm_train_llama-3.1-70b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-70B" .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b The training script uses either the ``Llama2Tokenizer`` or ``HuggingFaceTokenizer`` by default. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V3" .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V2-Lite" .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Download the Mixtral tokenizer. .. code-block:: shell mkdir tokenizer cd tokenizer export HF_TOKEN= wget --header="Authorization: Bearer $HF_TOKEN" -O ./tokenizer.model https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.model Use the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL=tokenizer/tokenizer.model .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-7B" .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-72B" Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH="/data/bookcorpus_text_sentence" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Download the dataset ^^^^^^^^^^^^^^^^^^^^ .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b pyt_megatron_lm_train_llama-3.1-70b-proxy For Llama models, use the `prepare_dataset.sh `_ script to prepare your dataset. To download the dataset, set the ``DATASET`` variable to the dataset you'd like to use. Three datasets are supported: ``DATASET=wiki``, ``DATASET=fineweb``, and ``DATASET=bookcorpus``. .. code-block:: shell DATASET=wiki TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for wiki-en dataset DATASET=bookcorpus TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for bookcorpus dataset ``TOKENIZER_MODEL`` can be any accessible Hugging Face tokenizer. Remember to either pre-download the tokenizer or setup Hugging Face access otherwise when needed -- see the :ref:`Tokenizer ` section. .. 
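The third supported dataset follows the same pattern; for example, with the same publicly accessible tokenizer as above:

.. code-block:: shell

   DATASET=fineweb TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for fineweb dataset

..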
note:: When training, set ``DATA_PATH`` to the specific file name prefix pointing to the ``.bin`` or ``.idx`` files, as in the following example: .. code-block:: shell DATA_PATH="data/bookcorpus_text_sentence" # Change to where your dataset is stored. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy If you don't already have the dataset, download the Mixtral dataset using the following commands: .. code-block:: shell mkdir mixtral-datasets cd mixtral-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/mixtral-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b pyt_megatron_lm_train_qwen2.5-72b If you don't already have the dataset, download the Qwen dataset using the following commands: ..
code-block:: shell mkdir -p temp/qwen-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/qwen-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Multi-node configuration ------------------------ If you're running multi-node training, update the following environment variables. They can also be passed as command line arguments. Refer to the following example configurations. * Change ``localhost`` to the master node's hostname: .. code-block:: shell MASTER_ADDR="${MASTER_ADDR:-localhost}" * Set the number of nodes you want to train on (for instance, ``2``, ``4``, ``8``): .. code-block:: shell NNODES="${NNODES:-1}" * Set the rank of each node (0 for master, 1 for the first worker node, and so on): .. code-block:: shell NODE_RANK="${NODE_RANK:-0}" * Set ``DATA_CACHE_PATH`` to a common directory accessible by all the nodes (for example, an NFS directory) for multi-node runs: .. code-block:: shell DATA_CACHE_PATH=/root/cache # Set to a common directory for multi-node runs * For multi-node runs, make sure the correct network drivers are installed on the nodes. If inside a Docker container, either install the drivers inside the Docker container or pass the network drivers from the host while creating the Docker container. .. code-block:: shell # Specify which RDMA interfaces to use for communication export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7 .. _amd-megatron-lm-run-training-v257: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X Series GPUs with the AMD Megatron-LM environment. Single node training -------------------- .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b To run the training on a single node for Llama 3.3 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.3-70B-Instruct \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MBS=2 \ BS=16 \ TE_FP8=0 \ TP=1 \ PP=1 \ FSDP=1 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b To run training on a single node for Llama 3.1 8B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh For Llama 3.1 8B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. 
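If you want to collect a profile for performance analysis, the ``ENABLE_PROFILING`` switch described under the key options below can be added to any of these commands. The following is a short sketch based on the BF16 command above -- profiling adds overhead, so a reduced iteration count is used here purely as an illustration:

.. code-block:: shell

   # Llama 3.1 8B BF16 single-node run with PyTorch profiling enabled
   ENABLE_PROFILING=1 \
   TEE_OUTPUT=1 \
   MBS=2 \
   BS=128 \
   TP=1 \
   TE_FP8=0 \
   SEQ_LENGTH=8192 \
   MODEL_SIZE=8 \
   TOTAL_ITERS=10 \
   bash examples/llama/train_llama3.sh

..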
container:: model-doc pyt_megatron_lm_train_llama-3.1-70b To run the training on a single node for Llama 3.1 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b-proxy To run the training on a single node for Llama 3.1 70B with proxy, use the following command. .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ NUM_LAYERS=40 \ bash examples/llama/train_llama3.sh .. note:: Use two or more nodes to run the *full* Llama 70B model with FP8 precision. .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b To run training on a single node for Llama 2 7B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh For Llama 2 7B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. container:: model-doc pyt_megatron_lm_train_llama-2-70b To run the training on a single node for Llama 2 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=7 \ BS=56 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 FORCE_BALANCE=true \ RUN_ENV=cluster \ MODEL_SIZE=671B \ TRAIN_ITERS=50 \ SEQ_LEN=4096 \ NUM_LAYERS=3 \ MICRO_BATCH_SIZE=1 GLOBAL_BATCH_SIZE=32 \ PR=bf16 \ TP=1 PP=1 ETP=1 EP=8 \ GEMM_TUNING=1 \ NVTE_CK_USES_BWD_V3=1 \ USE_GROUPED_GEMM=true MOE_USE_LEGACY_GROUPED_GEMM=true \ GPT_LAYER_IN_TE=true \ bash examples/deepseek_v3/train_deepseekv3.sh .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 GEMM_TUNING=1 \ PR=bf16 \ MBS=4 \ AC=none \ SEQ_LEN=4096 \ PAD_LEN=4096 \ TRAIN_ITERS=50 \ bash examples/deepseek_v2/train_deepseekv2.sh .. 
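To keep a copy of the console output for later comparison, any of these commands can be piped through ``tee``. For example, reusing the DeepSeek-V2-Lite command above (the log file name is arbitrary):

.. code-block:: shell

   export NVTE_FUSED_ATTN_CK=0

   # Same single-node run, with the console output also written to a log file
   GEMM_TUNING=1 \
   PR=bf16 \
   MBS=4 \
   AC=none \
   SEQ_LEN=4096 \
   PAD_LEN=4096 \
   TRAIN_ITERS=50 \
   bash examples/deepseek_v2/train_deepseekv2.sh 2>&1 | tee deepseekv2_single_node.log

..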
container:: model-doc pyt_megatron_lm_train_mixtral-8x7b To run training on a single node for Mixtral 8x7B (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=0 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=none \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=4096 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x7B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_mixtral-8x22b-proxy To run training on a single node for Mixtral 8x7B (MoE with expert parallel) with 4-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=4 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=full \ NUM_LAYERS=4 \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=8192 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x22B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b To run training on a single node for Qwen 2.5 7B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=0 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B For FP8, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=1 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ FSDP=1 \ CP=1 \ PP=1 \ MBS=3 \ BS=24 \ TE_FP8=0 \ MODEL_SIZE=72 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-72B \ RECOMPUTE_ACTIVATIONS=full \ CKPT_FORMAT=torch_dist Multi-node training examples ---------------------------- To run training on multiple nodes, launch the Docker container on each node. For example, for Llama 3 using a two node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=0 \ bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=1 \ bash examples/llama/train_llama3.sh Or, for DeepSeek-V3, an example script ``train_deepseek_v3_slurm.sh`` is provided in ``__ to enable training at scale under a SLURM environment. For example, to run training on 16 nodes, try the following command: .. code-block:: shell sbatch examples/deepseek_v3/train_deepseek_v3_slurm.sh .. _amd-megatron-lm-benchmark-test-vars-v257: Key options ----------- The benchmark tests support the following sets of variables. ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for B16 or ``1`` for FP8 -- ``0`` by default. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``FSDP`` ``1`` to enable PyTorch FSDP2. 
If FSDP is enabled, ``--use-distributed-optimizer``, ``--overlap-param-gather``, and ``--sequence-parallel`` are automatically disabled. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis. ``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. ``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2, for example. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` / ``TP_SIZE`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``TP`` is disabled when ``FSDP`` is turned on. ``EP`` / ``EP_SIZE`` Expert parallel for MoE models. ``SEQ_LENGTH`` Input sequence length. ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``AC`` Activation checkpointing (``none``, ``sel``, or ``full``) -- ``sel`` by default. ``NUM_LAYERS`` Use reduced number of layers as a proxy model. ``RECOMPUTE_NUM_LAYERS`` Number of layers used for checkpointing recompute. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM on ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../megatron-lm` for the latest version. The ROCm Megatron-LM framework now has limited support with this Docker environment; it now focuses on Primus with the Megatron backend. See :doc:`../primus-megatron` for the latest details. To learn how to migrate your existing workloads to Primus with Megatron-Core, see :doc:`megatron-lm-primus-migration-guide`. The `Megatron-LM framework for ROCm `_ is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X series GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama, DeepSeek, and Mixtral, enabling developers to train next-generation AI models more efficiently. AMD provides ready-to-use Docker images for MI300X series GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. It contains the following software components to accelerate training workloads: .. note:: This Docker environment is based on Python 3.10 and Ubuntu 22.04. For an alternative environment with Python 3.12 and Ubuntu 24.04, see the :doc:`previous ROCm Megatron-LM v25.6 Docker release `. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.8-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for docker in dockers %} .. tab-item:: ``{{ docker.pull_tag }}`` :sync: {{ docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endfor %} .. 
_amd-megatron-lm-model-support-v258: Supported models ================ The following models are supported for training performance benchmarking with Megatron-LM and ROCm on AMD Instinct MI300X series GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. {% set model_groups = data.model_groups %} .. raw:: html
   <!-- Model and variant selector rendered from the model_groups data: a "Model" row with one entry per model_group.group and a "Variant" row with one entry per model.model. -->
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). .. _amd-megatron-lm-performance-measurements-v258: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and latency measurements for training popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the latest version of this training benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-megatron-lm-training-v258: Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X series GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements-v258: Download the Docker image ------------------------- .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.8-benchmark-models.yaml {% set dockers = data.dockers %} 1. Use the following command to pull the Docker image from Docker Hub. {% if dockers|length > 1 %} .. tab-set:: {% for docker in data.dockers %} .. tab-item:: {{ docker.doc_name }} :sync: {{ docker.pull_tag }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} {% elif dockers|length == 1 %} {% set docker = dockers[0] %} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endif %} 2. Launch the Docker container. {% if dockers|length > 1 %} .. tab-set:: {% for docker in dockers %} .. tab-item:: {{ docker.doc_name }} :sync: {{ docker.pull_tag }} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} {% endfor %} {% elif dockers|length == 1 %} {% set docker = dockers[0] %} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} {% endif %} 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash 4. **Megatron-LM backward compatibility setup** -- this Docker is primarily intended for use with Primus, but it maintains Megatron-LM compatibility with limited support. 
To roll back to using Megatron-LM, follow these steps: .. code-block:: shell cd /workspace/Megatron-LM/ pip uninstall megatron-core pip install -e . The Docker container hosts ``__ at verified commit ``e8e9edc``. .. _amd-megatron-lm-environment-setup-v258: Configuration ============= .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b Update the ``train_llama3.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b Update the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy Update the ``train_deepseekv3.sh`` configuration script in the ``examples/deepseek_v3`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b Update the ``train_deepseekv2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Update the ``train_mixtral_moe.sh`` configuration script in the ``examples/mixtral`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. note:: See :ref:`Key options ` for more information on configuration options. Multi-node configuration ------------------------ Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. See :ref:`amd-megatron-lm-multi-node-examples-v258` for example run commands. .. _amd-megatron-lm-tokenizer-v258: Tokenizer --------- You can assign the path of an existing tokenizer to the ``TOKENIZER_MODEL`` as shown in the following examples. If the tokenizer is not found, it'll be downloaded if publicly available. .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b If you do not have Llama 3.3 tokenizer locally, you need to use your personal Hugging Face access token ``HF_TOKEN`` to download the tokenizer. See `Llama-3.3-70B-Instruct `_. After you are authorized, use your ``HF_TOKEN`` to download the tokenizer and set the variable ``TOKENIZER_MODEL`` to the tokenizer path. .. code-block:: shell export HF_TOKEN= The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.3-70B-Instruct" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-8B" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-70B" .. 
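As noted at the start of this section, ``TOKENIZER_MODEL`` can also point at a local copy of the tokenizer, which is convenient on nodes without Hugging Face access at run time. A minimal sketch with a hypothetical local directory holding the tokenizer files:

.. code-block:: shell

   # Hypothetical path to a pre-downloaded copy of the tokenizer files
   TOKENIZER_MODEL="$HOME/models/Llama-3.1-70B"

..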
container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b The training script uses either the ``Llama2Tokenizer`` or ``HuggingFaceTokenizer`` by default. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V3" .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V2-Lite" .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Download the Mixtral tokenizer. .. code-block:: shell mkdir tokenizer cd tokenizer export HF_TOKEN= wget --header="Authorization: Bearer $HF_TOKEN" -O ./tokenizer.model https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.model Use the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL=tokenizer/tokenizer.model .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-7B" .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-72B" Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH="/data/bookcorpus_text_sentence" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Download the dataset ^^^^^^^^^^^^^^^^^^^^ .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b pyt_megatron_lm_train_llama-3.1-70b-proxy For Llama models, use the `prepare_dataset.sh `_ script to prepare your dataset. To download the dataset, set the ``DATASET`` variable to the dataset you'd like to use. Three datasets are supported: ``DATASET=wiki``, ``DATASET=fineweb``, and ``DATASET=bookcorpus``. .. code-block:: shell DATASET=wiki TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for wiki-en dataset DATASET=bookcorpus TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for bookcorpus dataset ``TOKENIZER_MODEL`` can be any accessible Hugging Face tokenizer. Remember to either pre-download the tokenizer or setup Hugging Face access otherwise when needed -- see the :ref:`Tokenizer ` section. .. note:: When training set ``DATA_PATH`` to the specific file name prefix pointing to the ``.bin`` or ``.idx`` as in the following example: .. code-block:: shell DATA_PATH="data/bookcorpus_text_sentence" # Change to where your dataset is stored. .. 
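Before launching training, it can help to confirm that the tokenized ``.bin``/``.idx`` pair referenced by the prefix actually exists. For the bookcorpus example above (adjust the prefix to your own output location):

.. code-block:: shell

   # Both files must exist for the DATA_PATH prefix to be valid
   ls data/bookcorpus_text_sentence.bin data/bookcorpus_text_sentence.idx

..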
container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy If you don't already have the dataset, download the Mixtral dataset using the following commands: .. code-block:: shell mkdir mixtral-datasets cd mixtral-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/mixtral-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b pyt_megatron_lm_train_qwen2.5-72b If you don't already have the dataset, download the Mixtral dataset using the following commands: .. 
code-block:: shell mkdir -p temp/qwen-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/qwen-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. _amd-megatron-lm-run-training-v258: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X series GPUs with the AMD Megatron-LM environment. Single node training -------------------- .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b To run the training on a single node for Llama 3.3 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.3-70B-Instruct \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MBS=2 \ BS=16 \ TE_FP8=0 \ TP=1 \ PP=1 \ FSDP=1 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b To run training on a single node for Llama 3.1 8B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh For Llama 3.1 8B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b To run the training on a single node for Llama 3.1 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b-proxy To run the training on a single node for Llama 3.1 70B with proxy, use the following command. .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ NUM_LAYERS=40 \ bash examples/llama/train_llama3.sh .. note:: Use two or more nodes to run the *full* Llama 70B model with FP8 precision. .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. 
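As a rough illustration of the note above, a full (non-proxy) Llama 3.1 70B FP8 run on two nodes can be assembled from the same settings plus the multi-node variables shown in the multi-node training examples below. The global batch size here is simply doubled for two nodes and should be tuned for your cluster; treat this as a sketch rather than a validated recipe:

.. code-block:: shell

   # On the master node (NODE_RANK=0); repeat on the worker node with NODE_RANK=1
   CKPT_FORMAT=torch_dist \
   TEE_OUTPUT=1 \
   RECOMPUTE=1 \
   MBS=3 \
   BS=48 \
   TP=1 \
   TE_FP8=1 \
   FSDP=1 \
   SEQ_LENGTH=8192 \
   MODEL_SIZE=70 \
   TOTAL_ITERS=50 \
   MASTER_ADDR=IP_NODE0 \
   NNODES=2 \
   NODE_RANK=0 \
   bash examples/llama/train_llama3.sh

..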
container:: model-doc pyt_megatron_lm_train_llama-2-7b To run training on a single node for Llama 2 7B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh For Llama 2 7B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. container:: model-doc pyt_megatron_lm_train_llama-2-70b To run the training on a single node for Llama 2 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=7 \ BS=56 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 FORCE_BALANCE=true \ RUN_ENV=cluster \ MODEL_SIZE=671B \ TRAIN_ITERS=50 \ SEQ_LEN=4096 \ NUM_LAYERS=3 \ MICRO_BATCH_SIZE=1 GLOBAL_BATCH_SIZE=32 \ PR=bf16 \ TP=1 PP=1 ETP=1 EP=8 \ GEMM_TUNING=1 \ NVTE_CK_USES_BWD_V3=1 \ USE_GROUPED_GEMM=true MOE_USE_LEGACY_GROUPED_GEMM=true \ GPT_LAYER_IN_TE=true \ bash examples/deepseek_v3/train_deepseekv3.sh .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 GEMM_TUNING=1 \ PR=bf16 \ MBS=4 \ AC=none \ SEQ_LEN=4096 \ PAD_LEN=4096 \ TRAIN_ITERS=20 \ bash examples/deepseek_v2/train_deepseekv2.sh .. note:: Note that DeepSeek-V2-Lite is experiencing instability due to GPU memory access fault for large iterations. For stability, it's recommended to use Primus for this workload. See :doc:`../primus-megatron`. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b To run training on a single node for Mixtral 8x7B (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=0 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=none \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=4096 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x7B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_mixtral-8x22b-proxy To run training on a single node for Mixtral 8x7B (MoE with expert parallel) with 4-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=4 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=full \ NUM_LAYERS=4 \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=8192 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x22B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. 
.. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b To run training on a single node for Qwen 2.5 7B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=0 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B For FP8, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=1 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ FSDP=1 \ CP=1 \ PP=1 \ MBS=3 \ BS=24 \ TE_FP8=0 \ MODEL_SIZE=72 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-72B \ RECOMPUTE_ACTIVATIONS=full \ CKPT_FORMAT=torch_dist .. _amd-megatron-lm-multi-node-examples-v258: Multi-node training examples ---------------------------- To run training on multiple nodes, launch the Docker container on each node. For example, for Llama 3 using a two-node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=0 \ bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=1 \ bash examples/llama/train_llama3.sh Or, for DeepSeek-V3, an example script ``train_deepseek_v3_slurm.sh`` is provided in ``__ to enable training at scale under a SLURM environment. For example, to run training on 16 nodes, try the following command: .. code-block:: shell sbatch examples/deepseek_v3/train_deepseek_v3_slurm.sh .. _amd-megatron-lm-benchmark-test-vars-v258: Key options ----------- The benchmark tests support the following sets of variables. ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for BF16 or ``1`` for FP8 -- ``0`` by default. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``FSDP`` ``1`` to enable PyTorch FSDP2. If FSDP is enabled, ``--use-distributed-optimizer``, ``--overlap-param-gather``, and ``--sequence-parallel`` are automatically disabled. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis. ``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. ``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2, for example. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` / ``TP_SIZE`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``TP`` is disabled when ``FSDP`` is turned on. ``EP`` / ``EP_SIZE`` Expert parallel for MoE models. ``SEQ_LENGTH`` Input sequence length. ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``AC`` Activation checkpointing (``none``, ``sel``, or ``full``) -- ``sel`` by default. ``NUM_LAYERS`` Use a reduced number of layers as a proxy model.
``RECOMPUTE_NUM_LAYERS`` Number of layers used for checkpointing recompute. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ****************************************** Training a model with Megatron-LM on ROCm ****************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../megatron-lm` for the latest version. For a unified training solution on AMD GPUs with ROCm, the `rocm/megatron-lm `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including Megatron-LM and :doc:`torchtitan <../primus-pytorch>`. Primus with Megatron is designed to replace this ROCm Megatron-LM training workflow. To learn how to migrate workloads from Megatron-LM to Primus with Megatron, see :doc:`megatron-lm-primus-migration-guide`. The `Megatron-LM framework for ROCm `_ is a specialized fork of the robust Megatron-LM, designed to enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ GPUs, Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI workloads. It is purpose-built to support models like Llama, DeepSeek, and Mixtral, enabling developers to train next-generation AI models more efficiently. AMD provides ready-to-use Docker images for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components, including PyTorch, ROCm libraries, and Megatron-LM utilities. It contains the following software components to accelerate training workloads: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endfor %} .. _amd-megatron-lm-model-support: Supported models ================ The following models are supported for training performance benchmarking with Megatron-LM and ROCm on AMD Instinct MI300X Series GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. {% set model_groups = data.model_groups %} .. raw:: html
<!-- Model and variant selector (one entry per model group and model), generated from the model_groups data; the original raw HTML markup was lost in extraction. -->
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). .. _amd-megatron-lm-performance-measurements: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `__ page provides reference throughput and latency measurements for training popular AI models. .. important:: The performance data presented in `Performance results with AMD ROCm software `__ only reflects the latest version of this training benchmarking environment. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-megatron-lm-training: Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD Megatron-LM Docker image. .. _amd-megatron-lm-requirements: Download the Docker image ------------------------- .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/megatron-lm-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} 1. Use the following command to pull the Docker image from Docker Hub. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} 2. Launch the Docker container. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 128G \ --name megatron_training_env \ {{ docker.pull_tag }} {% endfor %} 3. Use these commands if you exit the ``megatron_training_env`` container and need to return to it. .. code-block:: shell docker start megatron_training_env docker exec -it megatron_training_env bash 4. **Megatron-LM backward compatibility setup** -- this Docker is primarily intended for use with Primus, but it maintains Megatron-LM compatibility with limited support. To roll back to using Megatron-LM, follow these steps: .. code-block:: shell cd /workspace/Megatron-LM/ pip uninstall megatron-core pip install -e . The Docker container hosts a verified commit of ``__. .. _amd-megatron-lm-environment-setup: Configuration ============= .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b Update the ``train_llama3.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. 
Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b Update the ``train_llama2.sh`` configuration script in the ``examples/llama`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy Update the ``train_deepseekv3.sh`` configuration script in the ``examples/deepseek_v3`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b Update the ``train_deepseekv2.sh`` configuration script in the ``examples/deepseek_v2`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Update the ``train_mixtral_moe.sh`` configuration script in the ``examples/mixtral`` directory of ``__ to configure your training run. Options can also be passed as command line arguments as described in :ref:`Run training `. .. note:: See :ref:`Key options ` for more information on configuration options. Multi-node configuration ------------------------ Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. See :ref:`amd-megatron-lm-multi-node-examples` for example run commands. .. _amd-megatron-lm-tokenizer: Tokenizer --------- You can assign the path of an existing tokenizer to the ``TOKENIZER_MODEL`` as shown in the following examples. If the tokenizer is not found, it'll be downloaded if publicly available. .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b If you do not have Llama 3.3 tokenizer locally, you need to use your personal Hugging Face access token ``HF_TOKEN`` to download the tokenizer. See `Llama-3.3-70B-Instruct `_. After you are authorized, use your ``HF_TOKEN`` to download the tokenizer and set the variable ``TOKENIZER_MODEL`` to the tokenizer path. .. code-block:: shell export HF_TOKEN= The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.3-70B-Instruct" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-8B" .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="meta-llama/Llama-3.1-70B" .. container:: model-doc pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b The training script uses either the ``Llama2Tokenizer`` or ``HuggingFaceTokenizer`` by default. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V3" .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b The training script uses the ``HuggingFaceTokenizer``. 
Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="deepseek-ai/DeepSeek-V2-Lite" .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy Download the Mixtral tokenizer. .. code-block:: shell mkdir tokenizer cd tokenizer export HF_TOKEN= wget --header="Authorization: Bearer $HF_TOKEN" -O ./tokenizer.model https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.model Use the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL=tokenizer/tokenizer.model .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-7B" .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b The training script uses the ``HuggingFaceTokenizer``. Set ``TOKENIZER_MODEL`` to the appropriate Hugging Face model path. .. code-block:: shell TOKENIZER_MODEL="Qwen/Qwen2.5-72B" Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``MOCK_DATA`` variable to toggle between mock and real data. The default value is ``1`` for enabled. .. code-block:: bash MOCK_DATA=1 * If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 DATA_PATH="/data/bookcorpus_text_sentence" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. Download the dataset ^^^^^^^^^^^^^^^^^^^^ .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b pyt_megatron_lm_train_llama-3.1-8b pyt_megatron_lm_train_llama-3.1-70b pyt_megatron_lm_train_llama-2-7b pyt_megatron_lm_train_llama-2-70b pyt_megatron_lm_train_llama-3.1-70b-proxy For Llama models, use the `prepare_dataset.sh `_ script to prepare your dataset. To download the dataset, set the ``DATASET`` variable to the dataset you'd like to use. Three datasets are supported: ``DATASET=wiki``, ``DATASET=fineweb``, and ``DATASET=bookcorpus``. .. code-block:: shell DATASET=wiki TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for wiki-en dataset DATASET=bookcorpus TOKENIZER_MODEL=NousResearch/Llama-2-7b-chat-hf bash examples/llama/prepare_dataset.sh #for bookcorpus dataset ``TOKENIZER_MODEL`` can be any accessible Hugging Face tokenizer. Remember to either pre-download the tokenizer or setup Hugging Face access otherwise when needed -- see the :ref:`Tokenizer ` section. .. note:: When training set ``DATA_PATH`` to the specific file name prefix pointing to the ``.bin`` or ``.idx`` as in the following example: .. code-block:: shell DATA_PATH="data/bookcorpus_text_sentence" # Change to where your dataset is stored. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. 
code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b If you don't already have the dataset, download the DeepSeek dataset using the following commands: .. code-block:: shell mkdir deepseek-datasets cd deepseek-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/SlimPajama.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-train.json wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/deepseek-datasets/alpaca_zh-valid.json cd .. bash tools/run_make_pretraining_dataset_megatron.sh deepseek-datasets/SlimPajama.json DeepSeekV3Tokenizer text deepseek-datasets deepseek-ai/DeepSeek-V3 To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/deepseek-datasets" # Change to where your dataset is stored .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b pyt_megatron_lm_train_mixtral-8x22b-proxy If you don't already have the dataset, download the Mixtral dataset using the following commands: .. code-block:: shell mkdir mixtral-datasets cd mixtral-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/mistral-datasets/wudao_mistralbpe_content_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. .. code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/mixtral-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b pyt_megatron_lm_train_qwen2.5-72b If you don't already have the dataset, download the Qwen dataset using the following commands: .. code-block:: shell mkdir -p temp/qwen-datasets wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.bin wget https://atp-modelzoo-wlcb-pai.oss-cn-wulanchabu.aliyuncs.com/release/models/pai-megatron-patch/qwen-datasets/wudao_qwenbpe_text_document.idx To train on this data, update the ``DATA_DIR`` variable to point to the location of your dataset. ..
code-block:: bash MOCK_DATA=0 # Train on real data DATA_DIR="/qwen-datasets" # Change to where your dataset is stored Ensure that the files are accessible inside the Docker container. .. _amd-megatron-lm-run-training: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X Series GPUs with the AMD Megatron-LM environment. Single node training -------------------- .. container:: model-doc pyt_megatron_lm_train_llama-3.3-70b To run the training on a single node for Llama 3.3 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell TOKENIZER_MODEL=meta-llama/Llama-3.3-70B-Instruct \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MBS=2 \ BS=16 \ TE_FP8=0 \ TP=1 \ PP=1 \ FSDP=1 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-3.1-8b To run training on a single node for Llama 3.1 8B FP8, navigate to the Megatron-LM folder and use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=512 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=10 \ GEMM_TUNING=0 \ bash examples/llama/train_llama3.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh For Llama 3.1 8B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=512 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=10 \ GEMM_TUNING=1 \ bash examples/llama/train_llama3.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=128 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. container:: model-doc pyt_megatron_lm_train_llama-3.1-70b To run the training on a single node for Llama 3.1 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama3.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. To run the training on a single node for Llama 3.1 70B FP8, use the following command. .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ bash examples/llama/train_llama3.sh .. 
tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell FP8_WEIGHT_TRANSPOSE_CACHE=0 \ CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ RECOMPUTE=1 \ MBS=3 \ BS=24 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=70 \ FSDP=1 \ TOTAL_ITERS=10 \ NUM_LAYERS=40 \ bash examples/llama/train_llama3.sh .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_llama-2-7b To run training on a single node for Llama 2 7B FP8, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh For Llama 2 7B BF16, use the following command: .. code-block:: shell TEE_OUTPUT=1 \ MBS=4 \ BS=256 \ TP=1 \ TE_FP8=0 \ SEQ_LENGTH=4096 \ MODEL_SIZE=7 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. container:: model-doc pyt_megatron_lm_train_llama-2-70b To run the training on a single node for Llama 2 70B BF16 with FSDP-v2 enabled, add the ``FSDP=1`` argument. For example, use the following command: .. code-block:: shell CKPT_FORMAT=torch_dist \ TEE_OUTPUT=1 \ MBS=7 \ BS=56 \ TP=1 \ TE_FP8=0 \ FSDP=1 \ RECOMPUTE=1 \ SEQ_LENGTH=4096 \ MODEL_SIZE=70 \ TOTAL_ITERS=50 \ bash examples/llama/train_llama2.sh .. note:: It is suggested to use ``TP=1`` when FSDP is enabled for higher throughput. FSDP-v2 is not supported with pipeline parallelism, expert parallelism, MCore's distributed optimizer, gradient accumulation fusion, or FP16. .. container:: model-doc pyt_megatron_lm_train_deepseek-v3-proxy To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 FORCE_BALANCE=true \ RUN_ENV=cluster \ MODEL_SIZE=671B \ TRAIN_ITERS=50 \ SEQ_LEN=4096 \ NUM_LAYERS=3 \ MICRO_BATCH_SIZE=1 GLOBAL_BATCH_SIZE=32 \ PR=bf16 \ TP=1 PP=1 ETP=1 EP=8 \ GEMM_TUNING=1 \ NVTE_CK_USES_BWD_V3=1 \ USE_GROUPED_GEMM=true MOE_USE_LEGACY_GROUPED_GEMM=true \ GPT_LAYER_IN_TE=true \ bash examples/deepseek_v3/train_deepseekv3.sh .. container:: model-doc pyt_megatron_lm_train_deepseek-v2-lite-16b To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. code-block:: shell export NVTE_FUSED_ATTN_CK=0 GEMM_TUNING=1 \ PR=bf16 \ MBS=4 \ AC=none \ SEQ_LEN=4096 \ PAD_LEN=4096 \ TRAIN_ITERS=20 \ bash examples/deepseek_v2/train_deepseekv2.sh .. note:: Note that DeepSeek-V2-Lite is experiencing instability due to GPU memory access fault for large iterations. For stability, it's recommended to use Primus for this workload. See :doc:`../primus-megatron`. .. container:: model-doc pyt_megatron_lm_train_mixtral-8x7b To run training on a single node for Mixtral 8x7B (MoE with expert parallel), navigate to the Megatron-LM folder and use the following command. .. 
code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=0 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=none \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=4096 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x7B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_mixtral-8x22b-proxy To run training on a single node for Mixtral 8x22B (MoE with expert parallel) with 4-layer proxy, navigate to the Megatron-LM folder and use the following command. .. code-block:: shell TOKENIZER_MODEL= RECOMPUTE_NUM_LAYERS=4 \ TEE_OUTPUT=1 \ MBS=1 \ GBS=16 \ TP_SIZE=1 \ PP_SIZE=1 \ AC=full \ NUM_LAYERS=4 \ PR=bf16 \ EP_SIZE=8 \ ETP_SIZE=1 \ SEQLEN=8192 \ FORCE_BALANCE=true \ MOCK_DATA=1 \ RUN_ENV=cluster \ MODEL_SIZE=8x22B \ TRAIN_ITERS=50 \ bash examples/mixtral/train_mixtral_moe.sh .. container:: model-doc pyt_megatron_lm_train_qwen2.5-7b To run training on a single node for Qwen 2.5 7B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=0 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B For FP8, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ TP=1 \ CP=1 \ PP=1 \ MBS=10 \ BS=640 \ TE_FP8=1 \ MODEL_SIZE=7 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-7B .. container:: model-doc pyt_megatron_lm_train_qwen2.5-72b To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. code-block:: shell bash examples/qwen/train_qwen2.sh \ FSDP=1 \ CP=1 \ PP=1 \ MBS=3 \ BS=24 \ TE_FP8=0 \ MODEL_SIZE=72 \ SEQ_LENGTH=2048 \ TOTAL_ITERS=50 \ MOCK_DATA=1 \ TOKENIZER_MODEL=Qwen/Qwen2.5-72B \ RECOMPUTE_ACTIVATIONS=full \ CKPT_FORMAT=torch_dist .. _amd-megatron-lm-multi-node-examples: Multi-node training examples ---------------------------- To run training on multiple nodes, launch the Docker container on each node. For example, for Llama 3 using a two-node setup (``NODE0`` as the master node), use these commands. * On the master node ``NODE0``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=0 \ bash examples/llama/train_llama3.sh * On the worker node ``NODE1``: .. code-block:: shell TEE_OUTPUT=1 \ MBS=2 \ BS=256 \ TP=1 \ TE_FP8=1 \ SEQ_LENGTH=8192 \ MODEL_SIZE=8 \ MASTER_ADDR=IP_NODE0 \ NNODES=2 \ NODE_RANK=1 \ bash examples/llama/train_llama3.sh Or, for DeepSeek-V3, an example script ``train_deepseek_v3_slurm.sh`` is provided in ``__ to enable training at scale under a SLURM environment. For example, to run training on 16 nodes, try the following command: .. code-block:: shell sbatch examples/deepseek_v3/train_deepseek_v3_slurm.sh .. _amd-megatron-lm-benchmark-test-vars: Key options ----------- The benchmark tests support the following sets of variables. ``TEE_OUTPUT`` ``1`` to enable training logs or ``0`` to disable. ``TE_FP8`` ``0`` for BF16 or ``1`` for FP8 -- ``0`` by default. ``GEMM_TUNING`` ``1`` to enable GEMM tuning, which boosts performance by using the best GEMM kernels. ``USE_FLASH_ATTN`` ``1`` to enable Flash Attention. ``FSDP`` ``1`` to enable PyTorch FSDP2. If FSDP is enabled, ``--use-distributed-optimizer``, ``--overlap-param-gather``, and ``--sequence-parallel`` are automatically disabled. ``ENABLE_PROFILING`` ``1`` to enable PyTorch profiling for performance analysis.
``transformer-impl`` ``transformer_engine`` to use the Transformer Engine (TE) or ``local`` to disable TE. ``MODEL_SIZE`` ``8B`` or ``70B`` for Llama 3 and 3.1. ``7B`` or ``70B`` for Llama 2, for example. ``TOTAL_ITERS`` The total number of iterations -- ``10`` by default. ``MOCK_DATA`` ``1`` to use mock data or ``0`` to use real data you provide. ``MBS`` Micro batch size. ``BS`` Global batch size. ``TP`` / ``TP_SIZE`` Tensor parallel (``1``, ``2``, ``4``, ``8``). ``TP`` is disabled when ``FSDP`` is turned on. ``EP`` / ``EP_SIZE`` Expert parallel for MoE models. ``SEQ_LENGTH`` Input sequence length. ``PR`` Precision for training. ``bf16`` for BF16 (default) or ``fp8`` for FP8 GEMMs. ``AC`` Activation checkpointing (``none``, ``sel``, or ``full``) -- ``sel`` by default. ``NUM_LAYERS`` Use reduced number of layers as a proxy model. ``RECOMPUTE_NUM_LAYERS`` Number of layers used for checkpointing recompute. Known issues ============ PyTorch Profiler may produce inaccurate traces when CPU activity profiling is enabled. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ******************************************** Training a model with Primus and Megatron-LM ******************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../primus-megatron` for the latest version. `Primus `__ is a unified and flexible training framework for AMD Instinct GPUs designed to support multiple training engine backends -- including Megatron -- to deliver scalable, high-performance model training. Performance acceleration is powered by `Primus Turbo `__ and ROCm libraries. .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/megatron-lm `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including Megatron-LM and :doc:`torchtitan `. Primus with Megatron is designed to replace the :doc:`ROCm Megatron-LM training ` workflow. To learn how to migrate workloads from Megatron-LM to Primus with Megatron, see :doc:`megatron-lm-primus-migration-guide`. AMD provides a ready-to-use Docker images for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components for Primus, ROCm, and Megatron-LM. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml .. tab-set:: .. tab-item:: {{ data.docker.pull_tag }} :sync: {{ data.docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in data.docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-primus-megatron-lm-model-support-v2510: Supported models ================ The following models are pre-optimized for performance on AMD Instinct GPUs. Some instructions, commands, and training examples in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
<!-- Model and variant selector (one entry per model group and model), generated from the model_groups data; the original raw HTML markup was lost in extraction. -->
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-primus-megatron-lm-training-v2510: Environment setup ================= .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on AMD Instinct GPUs. .. _amd-primus-megatron-lm-requirements-v2510: Pull the Docker image .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml {% set docker = data.docker %} 1. Pull the ``{{ docker.pull_tag }}`` Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ --shm-size 128G \ --name primus_training_env \ {{ docker.pull_tag }} Use these commands if you exit the ``primus_training_env`` container and need to return to it. .. code-block:: shell docker start primus_training_env docker exec -it primus_training_env bash The Docker container hosts verified branch ``release/v25.10`` of the `Primus `__ repository. .. _amd-primus-megatron-lm-environment-setup-v2510: Configuration ============= Primus defines a training configuration in YAML for each model in `examples/megatron/configs `__. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} For example, to update training parameters for {{ model.model }}, you can update ``examples/megatron/configs/{{ model.config_name }}``. Training configuration YAML files for other models follow this naming convention. {% endfor %} {% endfor %} .. note:: See :ref:`Key options ` for more information on configuration options. Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``mock_data`` field to toggle between mock and real data. The default value is ``true`` for enabled. .. code-block:: yaml mock_data: true * If you're using a real dataset, update the ``train_data_path`` field to point to the location of your dataset. .. code-block:: bash mock_data: false train_data_path: /path/to/your/dataset Ensure that the files are accessible inside the Docker container. .. _amd-primus-megatron-lm-tokenizer-v2510: Tokenizer --------- Set the ``HF_TOKEN`` environment variable with right permissions to access the tokenizer for each model. .. 
code-block:: bash # Export your HF_TOKEN in the workspace export HF_TOKEN= .. note:: In Primus, each model uses a tokenizer from Hugging Face. For example, Llama 3.1 8B model uses ``tokenizer_model: meta-llama/Llama-3.1-8B`` and ``tokenizer_type: Llama3Tokenizer`` defined in the `llama3.1-8B model `__ definition. .. _amd-primus-megatron-lm-run-training-v2510: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on AMD Instinct GPUs using Primus with the Megatron backend. Single node training -------------------- To run training on a single node, navigate to ``/workspace/Primus`` and use the following setup command: .. code-block:: shell pip install -r requirements.txt export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.3 70B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run pre-training for Llama 3.3 70B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.3_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 6 \ --global_batch_size 48 \ .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.3_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 2 \ --global_batch_size 16 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 8B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run pre-training for Llama 3.1 8B FP8, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid \ --micro_batch_size 4 \ --global_batch_size 512 \ .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid For Llama 3.1 8B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 4 \ --global_batch_size 512 \ .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b Once setup is complete, run the appropriate training command. 
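Before launching, you can optionally confirm that the performance-related environment variables are set and that the ROCm build of PyTorch can enumerate the GPUs inside the container. This is a general sanity check, not part of the Primus scripts:

.. code-block:: shell

   # Confirm the exported performance variables are present in the environment
   env | grep -E 'HSA_NO_SCRATCH_RECLAIM|NVTE_CK_USES_BWD_V3'

   # Confirm PyTorch reports the expected number of GPUs
   python3 -c "import torch; print(torch.cuda.device_count(), 'GPUs visible')"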
The following run commands are tailored to Llama 3.1 70B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run pre-training for Llama 3.1 70B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 4 \ --global_batch_size 32 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 To run the training on a single node for Llama 3.1 70B FP8, use the following command. .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid \ --no_fp8_weight_transpose_cache true \ --micro_batch_size 3 \ --global_batch_size 24 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 40 \ --fp8 hybrid \ --no_fp8_weight_transpose_cache true .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 7B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run pre-training for Llama 2 7B FP8, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid \ --micro_batch_size 13 \ --global_batch_size 416 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid To run pre-training for Llama 2 7B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 10 \ --global_batch_size 640 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b Once setup is complete, run the appropriate training command. 
The following run commands are tailored to Llama 2 70B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run pre-training for Llama 2 70B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama2_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 17 \ --global_batch_size 272 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama2_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v3-proxy Once setup is complete, run the appropriate training command. The following run commands are tailored to DeepSeek-V3. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run training on a single node for DeepSeek-V3 (MoE with expert parallel) BF16 with 3-layer proxy, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/deepseek_v3-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 3 \ --moe_layer_freq 1 \ --train_iters 50 \ --micro_batch_size 8 \ --global_batch_size 64 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/deepseek_v3-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 3 \ --moe_layer_freq 1 \ --micro_batch_size 3 \ --global_batch_size 192 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v2-lite-16b Once setup is complete, run the appropriate training command. The following run commands are tailored to DeepSeek-V2-Lite. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel) BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/deepseek_v2_lite-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 12 \ --global_batch_size 768 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/deepseek_v2_lite-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --global_batch_size 256 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x7B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run training on a single node for Mixtral 8x7B (MoE with expert parallel), use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. 
code-block:: shell EXP=examples/megatron/configs/MI355X/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 4 \ --global_batch_size 256 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x22b-proxy Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x22B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run training on a single node for Mixtral 8x22B BF16 (MoE with expert parallel) 4-layer proxy, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/mixtral_8x22B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 4 \ --pipeline_model_parallel_size 1 \ --micro_batch_size 2 \ --global_batch_size 16 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/mixtral_8x22B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 4 \ --pipeline_model_parallel_size 1 \ --micro_batch_size 1 \ --global_batch_size 16 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 7B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To run training on a single node for Qwen 2.5 7B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 16 \ --global_batch_size 768 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 For FP8, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid --micro_batch_size 20 \ --global_batch_size 800 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 72B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. 
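As with the other models, ``EXP`` selects the per-model YAML under ``examples/megatron/configs``, and the command-line flags used throughout this guide (such as ``--train_iters`` and ``--micro_batch_size``) adjust that configuration. If you want to confirm the configuration runs on your system before a longer benchmark, a short run is usually enough; the iteration count below is only an illustration, not a tuned recipe:

.. code-block:: shell

   # Short smoke test of the Qwen 2.5 72B config (illustrative value, not a benchmark setting)
   EXP=examples/megatron/configs/MI300X/qwen2.5_72B-pretrain.yaml \
   bash examples/run_pretrain.sh \
       --train_iters 5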
To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/qwen2.5_72B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 16 \ --global_batch_size 256 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/qwen2.5_72B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 .. _amd-primus-megatron-multi-node-examples-v2510: Multi-node training examples ---------------------------- Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. To run training on multiple nodes, you can use the `run_slurm_pretrain.sh `__ script to launch the multi-node workload. Use the following steps to set up your environment: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml {% set docker = data.docker %} .. code-block:: shell git clone --recurse-submodules https://github.com/AMD-AGI/Primus.git cd Primus git checkout release/v25.10 git submodule update --init --recursive export DOCKER_IMAGE={{ docker.pull_tag }} export HF_TOKEN= export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NCCL_IB_HCA= # specify which RDMA interfaces to use for communication export NCCL_SOCKET_IFNAME= # your Network Interface export GLOO_SOCKET_IFNAME= # your Network Interface export NCCL_IB_GID_INDEX=3 # Set InfiniBand GID index for NCCL communication. Default is 3 for ROCE # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 .. note:: * Make sure the correct network drivers are installed on the nodes. If running inside Docker, either install the drivers inside the Docker container or pass them through from the host when creating the container. * If ``NCCL_IB_HCA`` and ``NCCL_SOCKET_IFNAME`` are not set, Primus will try to auto-detect them. However, since NICs can vary across clusters, it is recommended to explicitly export the NCCL parameters for your cluster. * To find your network interface, you can use ``ip a``. * To find RDMA interfaces, you can use ``ibv_devices`` to list all of the RDMA/IB devices. * Remember to set ``DOCKER_IMAGE`` and ``HF_TOKEN`` (see :ref:`amd-primus-megatron-lm-tokenizer-v2510`) as appropriate. .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 8B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To train Llama 3.1 8B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --global_batch_size 1024 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 7B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model.
To train Llama 2 7B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --global_batch_size 2048 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 70B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To train Llama 3.1 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 3.1 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 70B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To train Llama 2 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 10 \ --global_batch_size 640 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 2 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 1536 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.3 70B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To train Llama 3.3 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 3.3 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x7B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To train Mixtral 8x7B BF16 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters.
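# (Worked example of the scaling rule below, assuming the single-node recipe uses
#  global_batch_size 32: 8 nodes * 32 = 256, which is the value passed here.)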
# For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 256 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 72B. See :ref:`amd-primus-megatron-lm-model-support-v2510` to switch to another available model. To train Qwen2.5 72B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/qwen2.5_72B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 8 \ --global_batch_size 512 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid .. _amd-primus-megatron-lm-benchmark-test-vars-v2510: Key options ----------- The following are key options to take note of: fp8 ``hybrid`` enables FP8 GEMMs. use_torch_fsdp2 ``use_torch_fsdp2: 1`` enables torch fsdp-v2. If FSDP is enabled, set ``use_distributed_optimizer`` and ``overlap_param_gather`` to ``false``. profile To enable PyTorch profiling, set these parameters: .. code-block:: yaml profile: true use_pytorch_profiler: true profile_step_end: 7 profile_step_start: 6 train_iters The total number of iterations (default: 50). mock_data True by default. micro_batch_size Micro batch size. global_batch_size Global batch size. recompute_granularity For activation checkpointing. num_layers For using a reduced number of layers as with proxy models. Known issues ============ The DeepSeek-V3 proxy model and the Mixtral 8x22B proxy model might exit with an error due to a memory free issue. However, this does not impact training runs. All iterations (50 in this case) complete before the exit, and the results are still available at the end. Further reading =============== - For an introduction to Primus, see `Primus: A Lightweight, Unified Training Framework for Large Models on AMD GPUs `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. This training environment now uses Primus with Megatron as the primary configuration. Limited support for the legacy ROCm Megatron-LM is still available; see the :doc:`../megatron-lm` documentation. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ******************************************** Training a model with Primus and Megatron-LM ******************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../primus-megatron` for the latest version. `Primus `__ is a unified and flexible LLM training framework designed to streamline LLM training on AMD Instinct GPUs using a modular, reproducible configuration paradigm. Primus is backend-agnostic and supports multiple training engines -- including Megatron. ..
note:: Primus with the Megatron backend is intended to replace ROCm Megatron-LM in this Dockerized training environment. To learn how to migrate workloads from Megatron-LM to Primus with Megatron, see :doc:`megatron-lm-primus-migration-guide`. For ease of use, AMD provides a ready-to-use Docker image for MI300 Series GPUs containing essential components for Primus and Megatron-LM. .. note:: This Docker environment is based on Python 3.10 and Ubuntu 22.04. For an alternative environment with Python 3.12 and Ubuntu 24.04, see the :doc:`previous ROCm Megatron-LM v25.6 Docker release `. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.7-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-primus-megatron-lm-model-support-v257: Supported models ================ The following models are pre-optimized for performance on AMD Instinct MI300X Series GPUs. Some instructions, commands, and training examples in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.7-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
Model
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Variant
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
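Some of the models listed above are gated on Hugging Face, so their tokenizers can only be downloaded after you have requested access and authenticated. A minimal sketch, assuming you already have an access token with read permissions (the ``<your_hf_token>`` placeholder is illustrative):

.. code-block:: shell

   # Export the token so Primus and Hugging Face tooling can pick it up
   export HF_TOKEN=<your_hf_token>

   # Optionally persist the login in this environment with the Hugging Face CLI
   huggingface-cli login --token $HF_TOKEN

This complements the ``HF_TOKEN`` step described in the Tokenizer section later in this guide.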
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-primus-megatron-lm-training-v257: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.7-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the ``{{ docker.pull_tag }}`` image. .. _amd-primus-megatron-lm-requirements-v257: Download the Docker image ------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ --shm-size 128G \ --name primus_training_env \ {{ docker.pull_tag }} 3. Use these commands if you exit the ``primus_training_env`` container and need to return to it. .. code-block:: shell docker start primus_training_env docker exec -it primus_training_env bash The Docker container hosts verified release tag ``v0.1.0-rc1`` of the `Primus `__ repository. .. _amd-primus-megatron-lm-environment-setup-v257: Configuration ============= Primus defines a training configuration in YAML for each model in `examples/megatron/configs `__. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.7-benchmark-models.yaml {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} To update training parameters for {{ model.model }}, you can update ``examples/megatron/configs/{{ model.config_name }}``. Note that training configuration YAML files for other models follow this naming convention. {% endfor %} {% endfor %} .. note:: See :ref:`Key options ` for more information on configuration options. Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``mock_data`` field to toggle between mock and real data. The default value is ``true`` for enabled. .. code-block:: yaml mock_data: true * If you're using a real dataset, update the ``train_data_path`` field to point to the location of your dataset. .. code-block:: bash mock_data: false train_data_path: /path/to/your/dataset Ensure that the files are accessible inside the Docker container. .. _amd-primus-megatron-lm-tokenizer-v257: Tokenizer --------- In Primus, each model uses a tokenizer from Hugging Face. 
For example, Llama 3.1 8B model uses ``tokenizer_model: meta-llama/Llama-3.1-8B`` and ``tokenizer_type: Llama3Tokenizer`` defined in the `llama3.1-8B model `__ definition. As such, you need to set the ``HF_TOKEN`` environment variable with right permissions to access the tokenizer for each model. .. code-block:: bash # Export your HF_TOKEN in the workspace export HF_TOKEN= .. _amd-primus-megatron-lm-run-training-v257: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X Series GPUs with the AMD Megatron-LM environment. Single node training -------------------- To run training on a single node, navigate to ``/workspace/Primus`` and use the following setup command: .. code-block:: shell pip install -r requirements.txt export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 Once setup is complete, run the appropriate training command. .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b To run pre-training for Llama 3.3 70B BF16, run: .. code-block:: shell EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 16 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b To run pre-training for Llama 3.1 8B FP8, run: .. code-block:: shell EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid For Llama 3.1 8B BF16, use the following command: .. code-block:: shell EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b To run pre-training for Llama 3.1 70B BF16, run: .. code-block:: shell EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 To run the training on a single node for Llama 3.1 70B FP8 with proxy, use the following command: .. code-block:: shell EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 40 \ --fp8 hybrid \ --no_fp8_weight_transpose_cache true .. note:: Use two or more nodes to run the *full* Llama 70B model with FP8 precision. .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b To run pre-training for Llama 2 7B FP8, run: .. code-block:: shell EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid To run pre-training for Llama 2 7B BF16, run: .. code-block:: shell EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b To run pre-training for Llama 2 70B BF16, run: .. code-block:: shell EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v3-proxy To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, use the following command: .. code-block:: shell EXP=examples/megatron/configs/deepseek_v3-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 3 \ --moe_layer_freq 1 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v2-lite-16b To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), use the following command: .. 
code-block:: shell EXP=examples/megatron/configs/deepseek_v2_lite-pretrain.yaml \ bash examples/run_pretrain.sh \ --global_batch_size 256 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b To run training on a single node for Mixtral 8x7B (MoE with expert parallel), use the following command: .. code-block:: shell EXP=examples/megatron/configs/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x22b-proxy To run training on a single node for Mixtral 8x22B (MoE with expert parallel) with 4-layer proxy, use the following command: .. code-block:: shell EXP=examples/megatron/configs/mixtral_8x22B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 4 \ --pipeline_model_parallel_size 1 \ --micro_batch_size 1 \ --global_batch_size 16 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-7b To run training on a single node for Qwen 2.5 7B BF16, use the following command: .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh --train_iters 50 For FP8, use the following command. .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_72B-pretrain.yaml \ bash examples/run_pretrain.sh --train_iters 50 Multi-node training examples ---------------------------- To run training on multiple nodes, you can use the `run_slurm_pretrain.sh `__ to launch the multi-node workload. Use the following steps to set up your environment: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.7-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} .. code-block:: shell cd /workspace/Primus/ export DOCKER_IMAGE={{ docker.pull_tag }} export HF_TOKEN= export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NCCL_IB_HCA= # specify which RDMA interfaces to use for communication export NCCL_SOCKET_IFNAME= # your Network Interface export GLOO_SOCKET_IFNAME= # your Network Interface export NCCL_IB_GID_INDEX=3 # Set InfiniBand GID index for NCCL communication. Default is 3 for ROCE .. note:: * Make sure the correct network drivers are installed on the nodes. If running inside Docker, either install the drivers inside the Docker container or pass them through from the host when creating the container. * If ``NCCL_IB_HCA`` and ``NCCL_SOCKET_IFNAME`` are not set, Primus will try to auto-detect. However, since NICs can vary across clusters, it is recommended to explicitly export the NCCL parameters for your cluster. * To find your network interface, you can use ``ip a``. * To find RDMA interfaces, you can use ``ibv_devices`` to get the list of all the RDMA/IB devices. .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b To train Llama 3.3 70B FP8 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 3.3 70B BF16 on 8 nodes, run: ..
code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b To train Llama 3.1 8B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --global_batch_size 1024 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b To train Llama 3.1 70B FP8 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 3.1 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b To train Llama 2 7B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 EXP=examples/megatron/configs/llama2_7B-pretrain.yaml bash ./examples/run_slurm_pretrain.sh --global_batch_size 2048 --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b To train Llama 2 70B FP8 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 10 \ --global_batch_size 640 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 2 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 1536 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b To train Mixtral 8x7B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 256 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b To train Qwen2.5 72B FP8 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/qwen2.5_72B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 8 \ --global_batch_size 512 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid .. _amd-primus-megatron-lm-benchmark-test-vars-v257: Key options ----------- The following are key options to take note of: fp8 ``hybrid`` enables FP8 GEMMs. use_torch_fsdp2 ``use_torch_fsdp2: 1`` enables torch fsdp-v2. If FSDP is enabled, set ``use_distributed_optimizer`` and ``overlap_param_gather`` to ``false``. profile To enable PyTorch profiling, set these parameters: .. code-block:: yaml profile: true use_pytorch_profiler: true profile_step_end: 7 profile_step_start: 6 train_iters The total number of iterations (default: 50). mock_data True by default. micro_batch_size Micro batch size. global_batch_size Global batch size. recompute_granularity For activation checkpointing.
num_layers For using a reduced number of layers as with proxy models. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ******************************************** Training a model with Primus and Megatron-LM ******************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../primus-megatron` for the latest version. `Primus `__ is a unified and flexible LLM training framework designed to streamline training. It streamlines LLM training on AMD Instinct GPUs using a modular, reproducible configuration paradigm. Primus is backend-agnostic and supports multiple training engines -- including Megatron. .. note:: Primus with Megatron is designed to replace the :doc:`ROCm Megatron-LM training <../megatron-lm>` workflow. To learn how to migrate workloads from Megatron-LM to Primus with Megatron, see :doc:`megatron-lm-primus-migration-guide`. For ease of use, AMD provides a ready-to-use Docker image for MI300 series GPUs containing essential components for Primus and Megatron-LM. This Docker is powered by Primus Turbo optimizations for performance; this release adds support for Primus Turbo with optimized attention and grouped GEMM kernels. .. note:: This Docker environment is based on Python 3.10 and Ubuntu 22.04. For an alternative environment with Python 3.12 and Ubuntu 24.04, see the :doc:`previous ROCm Megatron-LM v25.6 Docker release `. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.8-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-primus-megatron-lm-model-support: Supported models ================ The following models are pre-optimized for performance on AMD Instinct MI300X series GPUs. Some instructions, commands, and training examples in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.8-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
Model
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Variant
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-primus-megatron-lm-training: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.8-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} Environment setup ================= Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X series GPUs with the ``{{ docker.pull_tag }}`` image. .. _amd-primus-megatron-lm-requirements: Download the Docker image ------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ --shm-size 128G \ --name primus_training_env \ {{ docker.pull_tag }} 3. Use these commands if you exit the ``primus_training_env`` container and need to return to it. .. code-block:: shell docker start primus_training_env docker exec -it primus_training_env bash The Docker container hosts verified commit ``927a717`` of the `Primus `__ repository. .. _amd-primus-megatron-lm-environment-setup: Configuration ============= Primus defines a training configuration in YAML for each model in `examples/megatron/configs `__. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.8-benchmark-models.yaml {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} To update training parameters for {{ model.model }}, you can update ``examples/megatron/configs/{{ model.config_name }}``. Note that training configuration YAML files for other models follow this naming convention. {% endfor %} {% endfor %} .. note:: See :ref:`Key options ` for more information on configuration options. Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``mock_data`` field to toggle between mock and real data. The default value is ``true`` for enabled. .. code-block:: yaml mock_data: true * If you're using a real dataset, update the ``train_data_path`` field to point to the location of your dataset. .. code-block:: bash mock_data: false train_data_path: /path/to/your/dataset Ensure that the files are accessible inside the Docker container. .. _amd-primus-megatron-lm-tokenizer: Tokenizer --------- Set the ``HF_TOKEN`` environment variable with right permissions to access the tokenizer for each model. .. 
code-block:: bash # Export your HF_TOKEN in the workspace export HF_TOKEN= .. note:: In Primus, each model uses a tokenizer from Hugging Face. For example, Llama 3.1 8B model uses ``tokenizer_model: meta-llama/Llama-3.1-8B`` and ``tokenizer_type: Llama3Tokenizer`` defined in the `llama3.1-8B model `__ definition. .. _amd-primus-megatron-lm-run-training: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on MI300X series GPUs with the AMD Megatron-LM environment. Single node training -------------------- To run training on a single node, navigate to ``/workspace/Primus`` and use the following setup command: .. code-block:: shell pip install -r requirements.txt export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.3 70B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run pre-training for Llama 3.3 70B BF16, run: .. code-block:: shell EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 16 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 8B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run pre-training for Llama 3.1 8B FP8, run: .. code-block:: shell EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid For Llama 3.1 8B BF16, use the following command: .. code-block:: shell EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 70B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run pre-training for Llama 3.1 70B BF16, run: .. code-block:: shell EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 To run the training on a single node for Llama 3.1 70B FP8 with proxy, use the following command: .. code-block:: shell EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 40 \ --fp8 hybrid .. note:: Use two or more nodes to run the *full* Llama 70B model with FP8 precision. .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 7B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run pre-training for Llama 2 7B FP8, run: .. code-block:: shell EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid To run pre-training for Llama 2 7B BF16, run: .. code-block:: shell EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b Once setup is complete, run the appropriate training command. 
The following run commands are tailored to Llama 2 70B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run pre-training for Llama 2 70B BF16, run: .. code-block:: shell EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v3-proxy Once setup is complete, run the appropriate training command. The following run commands are tailored to DeepSeek-V3. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run training on a single node for DeepSeek-V3 (MoE with expert parallel) with 3-layer proxy, use the following command: .. code-block:: shell EXP=examples/megatron/configs/deepseek_v3-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 3 \ --moe_layer_freq 1 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v2-lite-16b Once setup is complete, run the appropriate training command. The following run commands are tailored to DeepSeek-V2-Lite. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel), use the following command: .. code-block:: shell EXP=examples/megatron/configs/deepseek_v2_lite-pretrain.yaml \ bash examples/run_pretrain.sh \ --global_batch_size 256 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x7B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run training on a single node for Mixtral 8x7B (MoE with expert parallel), use the following command: .. code-block:: shell EXP=examples/megatron/configs/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x22b-proxy Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x22B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run training on a single node for Mixtral 8x22B (MoE with expert parallel) with 4-layer proxy, use the following command: .. code-block:: shell EXP=examples/megatron/configs/mixtral_8x22B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 4 \ --pipeline_model_parallel_size 1 \ --micro_batch_size 1 \ --global_batch_size 16 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 7B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. To run training on a single node for Qwen 2.5 7B BF16, use the following command: .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh --train_iters 50 For FP8, use the following command. .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 72B. See :ref:`amd-primus-megatron-lm-model-support` to switch to another available model. 
To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_72B-pretrain.yaml \ bash examples/run_pretrain.sh --train_iters 50 .. _amd-primus-megatron-multi-node-examples: Multi-node training examples ---------------------------- Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. To run training on multiple nodes, you can use the `run_slurm_pretrain.sh `__ to launch the multi-node workload. Use the following steps to set up your environment: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.8-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} .. code-block:: shell cd /workspace/Primus/ export DOCKER_IMAGE={{ docker.pull_tag }} export HF_TOKEN= export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NCCL_IB_HCA= # specify which RDMA interfaces to use for communication export NCCL_SOCKET_IFNAME= # your Network Interface export GLOO_SOCKET_IFNAME= # your Network Interface export NCCL_IB_GID_INDEX=3 # Set InfiniBand GID index for NCCL communication. Default is 3 for ROCE .. note:: * Make sure the correct network drivers are installed on the nodes. If running inside Docker, either install the drivers inside the Docker container or pass them through from the host when creating the container. * If ``NCCL_IB_HCA`` and ``NCCL_SOCKET_IFNAME`` are not set, Primus will try to auto-detect. However, since NICs can vary across clusters, it is recommended to explicitly export the NCCL parameters for your cluster. * To find your network interface, you can use ``ip a``. * To find RDMA interfaces, you can use ``ibv_devices`` to get the list of all the RDMA/IB devices. .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b To train Llama 3.3 70B FP8 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --fp8 hybrid To train Llama 3.3 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b To train Llama 3.1 8B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --global_batch_size 1024 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b To train Llama 3.1 70B FP8 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --fp8 hybrid To train Llama 3.1 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b To train Llama 2 7B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters.
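# As an illustrative example (the per-node value is an assumption, not a measured benchmark setting):
# a per-node global batch size of 256 scales to 8 * 256 = 2048 on 8 nodes, which
# matches the --global_batch_size 2048 passed below.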
# For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 EXP=examples/megatron/configs/llama2_7B-pretrain.yaml bash ./examples/run_slurm_pretrain.sh --global_batch_size 2048 --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b To train Llama 2 70B FP8 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --fp8 hybrid To train Llama 2 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 1536 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b To train Mixtral 8x7B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 256 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b To train Qwen2.5 72B FP8 on 8 nodes, run: .. code-block:: shell NNODES=8 EXP=examples/megatron/configs/qwen2.5_72B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --fp8 hybrid .. _amd-primus-megatron-lm-benchmark-test-vars: Key options ----------- The following are key options to take note of: fp8 ``hybrid`` enables FP8 GEMMs. use_torch_fsdp2 ``use_torch_fsdp2: 1`` enables torch fsdp-v2. If FSDP is enabled, set ``use_distributed_optimizer`` and ``overlap_param_gather`` to ``false``. profile To enable PyTorch profiling, set these parameters: .. code-block:: yaml profile: true use_pytorch_profiler: true profile_step_end: 7 profile_step_start: 6 train_iters The total number of iterations (default: 50). mock_data True by default. micro_batch_size Micro batch size. global_batch_size Global batch size. recompute_granularity For activation checkpointing. num_layers For using a reduced number of layers as with proxy models. Further reading =============== - For an introduction to Primus, see `Primus: A Lightweight, Unified Training Framework for Large Models on AMD GPUs `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. This training environment now uses Primus with Megatron as the primary configuration. Limited support for the legacy ROCm Megatron-LM is still available; see the :doc:`../megatron-lm` documentation. --- :orphan: .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ******************************************** Training a model with Primus and Megatron-LM ******************************************** .. caution:: This documentation does not reflect the latest version of ROCm Megatron-LM training performance documentation. See :doc:`../primus-megatron` for the latest version.
`Primus `__ is a unified and flexible training framework for AMD Instinct GPUs designed to support multiple training engine backends -- including Megatron -- to deliver scalable, high-performance model training. Performance acceleration is powered by `Primus Turbo `__ and ROCm libraries. .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/megatron-lm `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including Megatron-LM and :doc:`torchtitan <../primus-pytorch>`. Primus with Megatron is designed to replace the :doc:`ROCm Megatron-LM training <../megatron-lm>` workflow. To learn how to migrate workloads from Megatron-LM to Primus with Megatron, see :doc:`megatron-lm-primus-migration-guide`. AMD provides ready-to-use Docker images for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components for Primus, ROCm, and Megatron-LM. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endfor %} .. _amd-primus-megatron-lm-model-support-v259: Supported models ================ The following models are pre-optimized for performance on AMD Instinct GPUs. Some instructions, commands, and training examples in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.9-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
Model
{% for model_group in model_groups %}
{{ model_group.group }}
{% endfor %}
Variant
{% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if models|length % 3 == 0 %}
{{ model.model }}
{% else %}
{{ model.model }}
{% endif %} {% endfor %} {% endfor %}
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-primus-megatron-lm-training-v259: Environment setup ================= .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.9-benchmark-models.yaml Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on AMD Instinct GPUs. .. _amd-primus-megatron-lm-requirements-v259: Pull the Docker image .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} 1. Pull the appropriate Docker image for your AMD GPU architecture from Docker Hub. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} 2. Launch the Docker container. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ --shm-size 128G \ --name primus_training_env \ {{ docker.pull_tag }} {% endfor %} 3. Use these commands if you exit the ``primus_training_env`` container and need to return to it. .. code-block:: shell docker start primus_training_env docker exec -it primus_training_env bash The Docker container hosts verified commit ``e16b27b`` of the `Primus `__ repository. .. _amd-primus-megatron-lm-environment-setup-v259: Configuration ============= Primus defines a training configuration in YAML for each model in `examples/megatron/configs `__. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.9-benchmark-models.yaml {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} For example, to update training parameters for {{ model.model }}, you can update ``examples/megatron/configs/{{ model.config_name }}``. Training configuration YAML files for other models follow this naming convention. {% endfor %} {% endfor %} .. note:: See :ref:`Key options ` for more information on configuration options. Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``mock_data`` field to toggle between mock and real data. The default value is ``true`` for enabled. .. code-block:: yaml mock_data: true * If you're using a real dataset, update the ``train_data_path`` field to point to the location of your dataset. .. 
code-block:: bash mock_data: false train_data_path: /path/to/your/dataset Ensure that the files are accessible inside the Docker container. .. _amd-primus-megatron-lm-tokenizer-v259: Tokenizer --------- Set the ``HF_TOKEN`` environment variable with right permissions to access the tokenizer for each model. .. code-block:: bash # Export your HF_TOKEN in the workspace export HF_TOKEN= .. note:: In Primus, each model uses a tokenizer from Hugging Face. For example, Llama 3.1 8B model uses ``tokenizer_model: meta-llama/Llama-3.1-8B`` and ``tokenizer_type: Llama3Tokenizer`` defined in the `llama3.1-8B model `__ definition. .. _amd-primus-megatron-lm-run-training-v259: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on AMD Instinct GPUs using Primus with the Megatron backend. Single node training -------------------- To run training on a single node, navigate to ``/workspace/Primus`` and use the following setup command: .. code-block:: shell pip install -r requirements.txt export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.3 70B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run pre-training for Llama 3.3 70B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 6 \ --global_batch_size 48 \ .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 2 \ --global_batch_size 16 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 8B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run pre-training for Llama 3.1 8B FP8, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid \ --micro_batch_size 4 \ --global_batch_size 512 \ .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid For Llama 3.1 8B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 4 \ --global_batch_size 512 \ .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 70B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run pre-training for Llama 3.1 70B BF16, run: .. tab-set:: .. 
tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 4 \ --global_batch_size 32 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 To run the training on a single node for Llama 3.1 70B FP8, use the following command. .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid \ --no_fp8_weight_transpose_cache true \ --micro_batch_size 3 \ --global_batch_size 24 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 40 \ --fp8 hybrid \ --no_fp8_weight_transpose_cache true .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 7B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run pre-training for Llama 2 7B FP8, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid \ --micro_batch_size 13 \ --global_batch_size 416 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid To run pre-training for Llama 2 7B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 10 \ --global_batch_size 640 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 70B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run pre-training for Llama 2 70B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 17 \ --global_batch_size 272 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v3-proxy Once setup is complete, run the appropriate training command. The following run commands are tailored to DeepSeek-V3. 
See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run training on a single node for DeepSeek-V3 (MoE with expert parallel) BF16 with 3-layer proxy, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/deepseek_v3-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 3 \ --moe_layer_freq 1 \ --train_iters 50 \ --micro_batch_size 8 \ --global_batch_size 64 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/deepseek_v3-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 3 \ --moe_layer_freq 1 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v2-lite-16b Once setup is complete, run the appropriate training command. The following run commands are tailored to DeepSeek-V2-Lite. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel) BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/deepseek_v2_lite-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 12 \ --global_batch_size 768 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/deepseek_v2_lite-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --global_batch_size 256 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x7B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run training on a single node for Mixtral 8x7B (MoE with expert parallel), use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 4 \ --global_batch_size 256 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x22b-proxy Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x22B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run training on a single node for Mixtral 8x22B BF16 (MoE with expert parallel) 4-layer proxy, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/mixtral_8x22B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 4 \ --pipeline_model_parallel_size 1 \ --micro_batch_size 2 \ --global_batch_size 16 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/mixtral_8x22B_v0.1-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 4 \ --pipeline_model_parallel_size 1 \ --micro_batch_size 1 \ --global_batch_size 16 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-7b Once setup is complete, run the appropriate training command. 
The following run commands are tailored to Qwen 2.5 7B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run training on a single node for Qwen 2.5 7B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 16 \ --global_batch_size 768 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 For FP8, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid --micro_batch_size 20 \ --global_batch_size 800 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_7B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 72B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_72B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 16 \ --global_batch_size 256 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/megatron/configs/qwen2.5_72B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 .. _amd-primus-megatron-multi-node-examples-v259: Multi-node training examples ---------------------------- Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. To run training on multiple nodes, you can use the `run_slurm_pretrain.sh `__ to launch the multi-node workload. Use the following steps to setup your environment: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-megatron-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell git clone --recurse-submodules https://github.com/AMD-AGI/Primus.git cd Primus git checkout e16b27b export DOCKER_IMAGE={{ docker.pull_tag }} export HF_TOKEN= export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NCCL_IB_HCA= # specify which RDMA interfaces to use for communication export NCCL_SOCKET_IFNAME= # your Network Interface export GLOO_SOCKET_IFNAME= # your Network Interface export NCCL_IB_GID_INDEX=3 # Set InfiniBand GID index for NCCL communication. Default is 3 for ROCE {% endfor %} .. note:: * Make sure correct network drivers are installed on the nodes. If inside a Docker, either install the drivers inside the Docker container or pass the network drivers from the host while creating Docker container. * If ``NCCL_IB_HCA`` and ``NCCL_SOCKET_IFNAME`` are not set, Primus will try to auto-detect. 
However, since NICs can vary across different clusters, you are encouraged to explicitly export your NCCL parameters for the cluster. * To find your network interface, you can use ``ip a``. * To find RDMA interfaces, you can use ``ibv_devices`` to get the list of all the RDMA/IB devices. * Remember to set ``DOCKER_IMAGE`` and ``HF_TOKEN`` (see :ref:`amd-primus-megatron-lm-tokenizer-v259`) as appropriate. .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 8B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To train Llama 3.1 8B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --global_batch_size 1024 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 7B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To train Llama 2 7B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/llama2_7B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --global_batch_size 2048 \ --fp8 hybrid .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 70B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To train Llama 3.1 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 3.1 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/llama3.1_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 70B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To train Llama 2 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 10 \ --global_batch_size 640 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 2 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/llama2_70B-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 1536 \ --recompute_num_layers 12 ..
container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.3 70B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To train Llama 3.3 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid To train Llama 3.3 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/llama3.3_70B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x7B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To train Mixtral 8x7B BF16 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/mixtral_8x7B_v0.1-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 256 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 72B. See :ref:`amd-primus-megatron-lm-model-support-v259` to switch to another available model. To train Qwen 2.5 72B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/qwen2.5_72B-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 8 \ --global_batch_size 512 \ --recompute_num_layers 80 \ --no_fp8_weight_transpose_cache true \ --fp8 hybrid .. _amd-primus-megatron-lm-benchmark-test-vars-v259: Key options ----------- The following are key options to take note of; a combined usage sketch appears after the Further reading section below. fp8 ``hybrid`` enables FP8 GEMMs. use_torch_fsdp2 ``use_torch_fsdp2: 1`` enables torch fsdp-v2. If FSDP is enabled, set ``use_distributed_optimizer`` and ``overlap_param_gather`` to ``false``. profile To enable PyTorch profiling, set these parameters: .. code-block:: yaml profile: true use_pytorch_profiler: true profile_step_end: 7 profile_step_start: 6 train_iters The total number of iterations (default: 50). mock_data Use synthetically generated mock data instead of a real dataset. True by default. micro_batch_size Micro batch size. global_batch_size Global batch size. recompute_granularity For activation checkpointing. num_layers For using a reduced number of layers as with proxy models. Known issues ============ PyTorch Profiler may produce inaccurate traces when CPU activity profiling is enabled. Further reading =============== - For an introduction to Primus, see `Primus: A Lightweight, Unified Training Framework for Large Models on AMD GPUs `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_.
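To see how several of these key options combine in practice, the following is a minimal sketch of a single-node ``run_pretrain.sh`` invocation that overrides them on the command line. The flag names and config path mirror the examples earlier on this page; the batch-size values are illustrative placeholders rather than tuned settings.

.. code-block:: shell

   # Hedged sketch: override several of the key options described above
   # on the command line. The batch sizes are placeholders, not tuned values.
   EXP=examples/megatron/configs/llama3.1_8B-pretrain.yaml \
   bash examples/run_pretrain.sh \
       --train_iters 50 \
       --micro_batch_size 2 \
       --global_batch_size 128 \
       --fp8 hybrid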
Previous versions ================= See :doc:`megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. This training environment now uses Primus with Megatron as the primary configuration. Limited support for the legacy ROCm Megatron-LM is still available; see the :doc:`../megatron-lm` documentation. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker **************************************** Training a model with Primus and PyTorch **************************************** .. caution:: This documentation does not reflect the latest version of ROCm Primus PyTorch training performance benchmark documentation. See :doc:`../primus-pytorch` for the latest version. `Primus `__ is a unified and flexible LLM training framework designed to streamline training. It streamlines LLM training on AMD Instinct GPUs using a modular, reproducible configuration paradigm. Primus now supports the PyTorch torchtitan backend. .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/pytorch-training `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including torchtitan and :doc:`Megatron-LM `. Primus with the PyTorch torchtitan backend is designed to replace the :doc:`ROCm PyTorch training ` workflow. See :doc:`pytorch-training` to see steps to run workloads without Primus. AMD provides a ready-to-use Docker image for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components for Primus and PyTorch training with Primus Turbo optimizations. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-pytorch-benchmark-models.yaml .. tab-set:: .. tab-item:: {{ data.docker.pull_tag }} :sync: {{ data.docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in data.docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-primus-pytorch-model-support-v2510: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-pytorch-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
[Model and variant selector rendered from the benchmark-models YAML data; raw HTML markup omitted.]
.. seealso:: For additional workloads, including Llama 3.3, Llama 3.2, Llama 2, GPT OSS, Qwen, and Flux models, see the documentation :doc:`pytorch-training` (without Primus) .. _amd-primus-pytorch-performance-measurements-v2510: System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-pytorch-benchmark-models.yaml Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ data.docker.pull_tag }} Run training ============ Once the setup is complete, choose between the following two workflows to start benchmarking training. For fine-tuning workloads and multi-node training examples, see :doc:`pytorch-training` (without Primus). For best performance on MI325X, MI350X, and MI355X GPUs, you might need to tweak some configurations (such as batch sizes). .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-pytorch-benchmark-models.yaml {% set docker = data.docker %} {% set model_groups = data.model_groups %} .. tab-set:: .. tab-item:: MAD-integrated benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run command is tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v2510` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Primus benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run commands are tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v2510` to switch to another available model. .. rubric:: Download the Docker image and required packages 1. Pull the ``{{ docker.pull_tag }}`` Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Run the Docker container. .. 
code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash .. rubric:: Prepare training datasets and dependencies The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token .. rubric:: Pretraining To get started, navigate to the ``Primus`` directory in your container. .. code-block:: cd /workspace/Primus Now, to start the pretraining benchmark, use the ``run_pretrain.sh`` script included with Primus with the appropriate options. .. rubric:: Benchmarking examples .. container:: model-doc primus_pyt_train_llama-3.1-8b Use the following command to run train Llama 3.1 8B with BF16 precision using Primus torchtitan. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 6 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 6 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 4 To train Llama 3.1 8B with FP8 precision, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 8 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_8B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 7 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_8B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 5 .. container:: model-doc primus_pyt_train_llama-3.1-70b Use the following command to run train Llama 3.1 70B with BF16 precision using Primus torchtitan. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 8 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 6 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 4 To train Llama 3.1 70B with FP8 precision, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 6 .. tab-item:: MI325X :sync: MI325X .. 
code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 5 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 3 .. container:: model-doc primus_pyt_train_deepseek-v2 Use the following command to run train DeepSeek V2 16B with BF16 precision using Primus torchtitan. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 16 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 10 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 8 To train DeepSeek V2 16B with FP8 precision, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 16 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 8 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 8 {% endfor %} {% endfor %} Further reading =============== - For an introduction to Primus, see `Primus: A Lightweight, Unified Training Framework for Large Models on AMD GPUs `__. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker **************************************** Training a model with Primus and PyTorch **************************************** .. caution:: This documentation does not reflect the latest version of ROCm Primus PyTorch training performance benchmark documentation. See :doc:`../primus-pytorch` for the latest version. `Primus `__ is a unified and flexible LLM training framework designed to streamline training. It streamlines LLM training on AMD Instinct GPUs using a modular, reproducible configuration paradigm. Primus now supports the PyTorch torchtitan backend. .. note:: Primus with the PyTorch torchtitan backend is designed to replace the :doc:`ROCm PyTorch training <../pytorch-training>` workflow. See :doc:`../pytorch-training` to see steps to run workloads without Primus. .. 
datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-pytorch-v25.8-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} For ease of use, AMD provides a ready-to-use Docker image -- ``{{ docker.pull_tag }}`` -- for MI300X series GPUs containing essential components for Primus and PyTorch training with Primus Turbo optimizations. .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-primus-pytorch-model-support-v258: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-pytorch-v25.8-benchmark-models.yaml {% set unified_docker = data.dockers[0] %} {% set model_groups = data.model_groups %} .. raw:: html
[Model and variant selector rendered from the benchmark-models YAML data; raw HTML markup omitted.]
.. seealso:: For additional workloads, including Llama 3.3, Llama 3.2, Llama 2, GPT OSS, Qwen, and Flux models, see the documentation :doc:`../pytorch-training` (without Primus) .. _amd-primus-pytorch-performance-measurements-v258: System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-pytorch-v25.8-benchmark-models.yaml {% set unified_docker = data.dockers[0] %} Pull the Docker image ===================== Use the following command to pull the `Docker image <{{ unified_docker.docker_hub_url }}>`_ from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Run training ============ {% set model_groups = data.model_groups %} Once the setup is complete, choose between the following two workflows to start benchmarking training. For fine-tuning workloads and multi-node training examples, see :doc:`../pytorch-training` (without Primus). .. tab-set:: .. tab-item:: MAD-integrated benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run command is tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v258` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Standalone benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run commands are tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v258` to switch to another available model. .. rubric:: Download the Docker image and required packages 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} 2. Run the Docker container. .. 
code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ unified_docker.pull_tag }} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 3. In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train .. rubric:: Prepare training datasets and dependencies 1. The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token 2. Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh .. rubric:: Pretraining To start the pretraining benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "primus_pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Currently, only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. .. rubric:: Benchmarking examples Use the following command to run train {{ model.model }} with BF16 precision using Primus torchtitan. .. code-block:: shell ./pytorch_benchmark_report.sh -m {{ model.model_repo }} To train {{ model.model }} with FP8 precision, use the following command. .. code-block:: shell ./pytorch_benchmark_report.sh -m {{ model.model_repo }} -p FP8 {% endfor %} {% endfor %} Further reading =============== - For an introduction to Primus, see `Primus: A Lightweight, Unified Training Framework for Large Models on AMD GPUs `__. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker **************************************** Training a model with Primus and PyTorch **************************************** .. caution:: This documentation does not reflect the latest version of ROCm Primus PyTorch training performance benchmark documentation. See :doc:`../primus-pytorch` for the latest version. 
`Primus `__ is a unified and flexible LLM training framework designed to streamline training. It streamlines LLM training on AMD Instinct GPUs using a modular, reproducible configuration paradigm. Primus now supports the PyTorch torchtitan backend. .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/pytorch-training `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including torchtitan and :doc:`Megatron-LM <../primus-megatron>`. Primus with the PyTorch torchtitan backend is designed to replace the :doc:`ROCm PyTorch training <../pytorch-training>` workflow. See :doc:`../pytorch-training` to see steps to run workloads without Primus. AMD provides a ready-to-use Docker image for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components for Primus and PyTorch training with Primus Turbo optimizations. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-pytorch-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endfor %} .. _amd-primus-pytorch-model-support-v259: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-pytorch-v25.9-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
[Model and variant selector rendered from the benchmark-models YAML data; raw HTML markup omitted.]
.. seealso:: For additional workloads, including Llama 3.3, Llama 3.2, Llama 2, GPT OSS, Qwen, and Flux models, see the documentation :doc:`../pytorch-training` (without Primus) .. _amd-primus-pytorch-performance-measurements-v259: System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-pytorch-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} Use the following command to pull the Docker image from Docker Hub. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} Run training ============ Once the setup is complete, choose between the following two workflows to start benchmarking training. For fine-tuning workloads and multi-node training examples, see :doc:`../pytorch-training` (without Primus). For best performance on MI325X, MI350X, and MI355X GPUs, you might need to tweak some configurations (such as batch sizes). .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/primus-pytorch-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} {% set model_groups = data.model_groups %} .. tab-set:: .. tab-item:: MAD-integrated benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run command is tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v259` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. .. note:: Currently, Primus torchtitan models are run with Primus Turbo enabled for enhanced performance. To disable Primus Turbo, modify respective configuration file ``scripts/primus/pytorch_train/primus_torchtitan_scripts/llama3_[8B|70B]-[BF16|FP8].yaml``. {% endfor %} {% endfor %} .. 
tab-item:: Primus benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run commands are tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v259` to switch to another available model. .. rubric:: Download the Docker image and required packages 1. Pull the appropriate Docker image for your AMD GPU architecture from Docker Hub. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} 2. Run the Docker container. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} {% endfor %} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash .. rubric:: Prepare training datasets and dependencies The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token .. rubric:: Pretraining To get started, navigate to the ``Primus`` directory in your container. .. code-block:: cd /workspace/Primus Now, to start the pretraining benchmark, use the ``run_pretrain.sh`` script included with Primus with the appropriate options. .. rubric:: Benchmarking examples .. container:: model-doc primus_pyt_train_llama-3.1-8b Use the following command to run train Llama 3.1 8B with BF16 precision using Primus torchtitan. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 5 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 6 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 4 To train Llama 3.1 8B with FP8 precision, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 8 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_8B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 7 .. tab-item:: MI300X :sync: MI325X and MI300X .. 
code-block:: shell EXP=examples/torchtitan/configs/llama3.1_8B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 5 .. container:: model-doc primus_pyt_train_llama-3.1-70b Use the following command to run train Llama 3.1 70B with BF16 precision using Primus torchtitan. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 8 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 6 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 4 To train Llama 3.1 70B with FP8 precision, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 6 .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 5 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell EXP=examples/torchtitan/configs/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh \ --metrics.enable_tensorboard false \ --profiling.enable_profiling false \ --training.batch_size 3 {% endfor %} {% endfor %} .. tab-item:: Standalone torchtitan benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run commands are tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v259` to switch to another available model. .. rubric:: Download the Docker image and required packages 1. Pull the appropriate Docker image for your AMD GPU architecture from Docker Hub. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} 2. Run the Docker container. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} {% endfor %} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 3. Navigate to the ``torchtitan`` workspace directory. .. code-block:: shell cd /workspace/torchtitan .. rubric:: Download the tokenizer 1. 
The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token 2. Download the tokenizer for your model. .. container:: model-doc {{ model.mad_tag }} .. code-block:: shell python3 scripts/download_tokenizer.py \ --repo_id {{ model.model_repo }} \ --tokenizer_path "original" \ --hf_token=${HF_TOKEN} .. rubric:: Pretraining examples Run the training script with the appropriate configuration file. To train with BF16 precision, use the following command: .. container:: model-doc {{ model.mad_tag }} .. code-block:: shell CONFIG_FILE={{ model.config_file.bf16 }} \ ./run_train.sh To train with FP8 precision, use the following command: .. container:: model-doc {{ model.mad_tag }} .. code-block:: shell CONFIG_FILE={{ model.config_file.fp8 }} \ ./run_train.sh {% endfor %} {% endfor %} Known issues ============ PyTorch Profiler may produce inaccurate traces when CPU activity profiling is enabled. Further reading =============== - For an introduction to Primus, see `Primus: A Lightweight, Unified Training Framework for Large Models on AMD GPUs `__. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: **************************************************** PyTorch training performance testing version history **************************************************** This table lists previous versions of the ROCm PyTorch training Docker image for training performance testing. For detailed information about available models for benchmarking, see the version-specific documentation. You can find tagged previous releases of the ``ROCm/pytorch-training`` Docker image on `Docker Hub `_. ..
list-table:: :header-rows: 1 * - Image version - Components - Resources * - v25.11 - * ROCm 7.1.0 * PyTorch 2.10.0.dev20251112+rocm7.1 - * :doc:`Primus PyTorch Training documentation <../primus-pytorch>` * :doc:`PyTorch training (legacy) documentation <../pytorch-training>` * `Docker Hub `__ * - v25.10 - * ROCm 7.1.0 * PyTorch 2.10.0.dev20251112+rocm7.1 - * :doc:`Primus PyTorch Training documentation ` * :doc:`PyTorch training (legacy) documentation ` * `Docker Hub `__ * - v25.9 - * ROCm 7.0.0 * Primus 0.3.0 * PyTorch 2.9.0.dev20250821+rocm7.0.0.lw.git125803b7 - * :doc:`Primus PyTorch Training documentation ` * :doc:`PyTorch training (legacy) documentation ` * `Docker Hub (gfx950) `__ * `Docker Hub (gfx942) `__ * - v25.8 - * ROCm 6.4.3 * PyTorch 2.8.0a0+gitd06a406 - * :doc:`Primus PyTorch Training documentation ` * :doc:`PyTorch training (legacy) documentation ` * `Docker Hub `__ * - v25.7 - * ROCm 6.4.2 * PyTorch 2.8.0a0+gitd06a406 - * :doc:`Documentation ` * `Docker Hub `__ * - v25.6 - * ROCm 6.3.4 * PyTorch 2.8.0a0+git7d205b2 - * :doc:`Documentation ` * `Docker Hub `__ * - v25.5 - * ROCm 6.3.4 * PyTorch 2.7.0a0+git637433 - * :doc:`Documentation ` * `Docker Hub `__ * - v25.4 - * ROCm 6.3.0 * PyTorch 2.7.0a0+git637433 - * :doc:`Documentation ` * `Docker Hub `__ * - v25.3 - * ROCm 6.3.0 * PyTorch 2.7.0a0+git637433 - * :doc:`Documentation ` * `Docker Hub `__ --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch on ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm PyTorch training performance benchmark documentation. See :doc:`../pytorch-training` for the latest version. .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/pytorch-training `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including torchtitan and :doc:`Megatron-LM <../primus-megatron>`. See :doc:`../primus-pytorch` for details. PyTorch is an open-source machine learning framework that is widely used for model training with GPU-optimized components for transformer-based models. The PyTorch for ROCm training Docker image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. It includes the following software components to accelerate training workloads: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/pytorch-training-benchmark-models.yaml .. tab-set:: .. tab-item:: {{ data.docker.pull_tag }} :sync: {{ data.docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in data.docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-pytorch-training-model-support-v2510: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI355X, MI350X, MI325X, and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/pytorch-training-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
[Model and variant selector rendered from the benchmark-models YAML data; raw HTML markup omitted.]
.. _amd-pytorch-training-supported-training-modes-v2510: The following table lists supported training modes per model. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/pytorch-training-benchmark-models.yaml {% set model_groups = data.model_groups %} .. dropdown:: Supported training modes .. list-table:: :header-rows: 1 * - Model - Supported training modes {% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if model.training_modes %} * - {{ model.model }} - ``{{ model.training_modes | join('``, ``') }}`` {% endif %} {% endfor %} {% endfor %} .. note:: Some model and fine-tuning combinations are not listed. This is because the `upstream torchtune repository `__ doesn't provide default YAML configurations for them. For advanced usage, you can create a custom configuration to enable unlisted fine-tuning methods by using an existing file in the ``/workspace/torchtune/recipes/configs`` directory as a template. .. _amd-pytorch-training-performance-measurements-v2510: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for training popular AI models. .. note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. Run training ============ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/pytorch-training-benchmark-models.yaml {% set docker = data.docker %} {% set model_groups = data.model_groups %} Once the setup is complete, choose between two options to start benchmarking training: .. tab-set:: .. tab-item:: MAD-integrated benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run command is tailored to {{ model.model }}. See :ref:`amd-pytorch-training-model-support-v2510` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. 
code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Standalone benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following commands are tailored to {{ model.model }}. See :ref:`amd-pytorch-training-model-support-v2510` to switch to another available model. {% endfor %} {% endfor %} .. rubric:: Download the Docker image and required packages 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 3. In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train .. rubric:: Prepare training datasets and dependencies 1. The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token 2. Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh .. container:: model-doc pyt_train_llama-3.1-8b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 8B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 .. container:: model-doc pyt_train_llama-3.1-70b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 70B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - `TorchData `__ * - ``tomli`` - `Tomli `__ * - ``tiktoken`` - `tiktoken `__ * - ``blobfile`` - `blobfile `__ * - ``tabulate`` - `tabulate `__ * - ``wandb`` - `Weights & Biases `__ * - ``sentencepiece`` - `SentencePiece `__ 0.2.0 * - ``tensorboard`` - `TensorBoard `__ 2.18.0 .. container:: model-doc pyt_train_flux ``pytorch_benchmark_setup.sh`` installs the following libraries for FLUX: .. 
list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `__ 3.2.0 * - ``sentencepiece`` - `SentencePiece `__ 0.2.0 * - ``tensorboard`` - `TensorBoard `__ 2.18.0 * - ``csvkit`` - `csvkit `__ 2.0.1 * - ``deepspeed`` - `DeepSpeed `__ 0.16.2 * - ``diffusers`` - `Hugging Face Diffusers `__ 0.31.0 * - ``GitPython`` - `GitPython `__ 3.1.44 * - ``opencv-python-headless`` - `opencv-python-headless `__ 4.10.0.84 * - ``peft`` - `PEFT `__ 0.14.0 * - ``protobuf`` - `Protocol Buffers `__ 5.29.2 * - ``pytest`` - `PyTest `__ 8.3.4 * - ``python-dotenv`` - `python-dotenv `__ 1.0.1 * - ``seaborn`` - `Seaborn `__ 0.13.2 * - ``transformers`` - `Transformers `__ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following datasets from Hugging Face: * `frank-chieng/chinese_architecture_siheyuan `__ {% for model_group in model_groups %} {% for model in model_group.models %} {% set training_modes = model.training_modes %} {% set training_mode_descs = { "pretrain": "Benchmark pre-training.", "HF_pretrain": "Llama 3.1 8B pre-training with FP8 precision." } %} {% set available_modes = training_modes | select("in", ["pretrain", "HF_pretrain"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Pretraining To start the pre-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. {% if model.mad_tag == "pyt_train_dlrm" %} 1. Go to the DLRM directory. .. code-block:: shell cd /workspace/DLRMBenchmark 2. To run the single node training benchmark for DLRM-v2 with TF32 precision, run the following script. .. code-block:: shell ./launch_training_single_node.sh To run with MAD within the Docker container, use the following command. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -m DLRM {% else %} .. code-block:: shell ./pytorch_benchmark_report.sh -t {% if available_modes | length == 1 %}{{ available_modes[0] }}{% else %}$training_mode{% endif %} \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length {% if model.mad_tag == "pyt_train_flux" %} .. container:: model-doc {{ model.mad_tag }} .. note:: Currently, FLUX models are not supported out-of-the-box on this Docker. To use FLUX, refer to ``rocm/pytorch-training`` Docker: :doc:`pytorch-training-v25.6` Occasionally, downloading the Flux dataset might fail. In the event of this error, manually download it from Hugging Face at `black-forest-labs/FLUX.1-dev `_ and save it to `/workspace/FluxBenchmark`. This ensures that the test script can access the required dataset. {% endif %} .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% endif %} {% endif %} {% set training_modes = model.training_modes %} {% set training_mode_descs = { "posttrain": "Benchmark post-training.", } %} {% set available_modes = training_modes | select("in", ["posttrain"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. 
rubric:: Post-training To start the post-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t {% if available_modes | length == 1 %}{{ available_modes[0] }}{% else %}$training_mode{% endif %} \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% endif %} {% set training_mode_descs = { "finetune_fw": "Full weight fine-tuning (BF16 and FP8 supported).", "finetune_lora": "LoRA fine-tuning (BF16 supported).", "finetune_qlora": "QLoRA fine-tuning (BF16 supported).", "HF_finetune_lora": "LoRA fine-tuning with Hugging Face PEFT.", } %} {% set available_modes = training_modes | select("in", ["finetune_fw", "finetune_lora", "finetune_qlora", "HF_finetune_lora"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Fine-tuning To start the fine-tuning benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. See :ref:`supported training modes `. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if "finetune_fw" in available_modes %} or ``FP8``{% endif %} - All models support BF16.{% if "finetune_fw" in available_modes %} FP8 is only available for full weight fine-tuning.{% endif %} * - ``$sequence_length`` - Between 2048 and 16384. - Sequence length for the language model. {% if model.mad_tag in ["pyt_train_llama3.2-vision-11b", "pyt_train_llama-3.2-vision-90b"] %} .. note:: For LoRA and QLoRA support with vision models (Llama 3.2 11B and 90B), use the following torchtune commit for compatibility: .. code-block:: shell git checkout 48192e23188b1fc524dd6d127725ceb2348e7f0e {% elif model.mad_tag in ["pyt_train_llama-2-7b", "pyt_train_llama-2-13b", "pyt_train_llama-2-70b"] %} .. note:: You might encounter the following error with Llama 2: ``ValueError: seq_len (16384) of input tensor should be smaller than max_seq_len (4096)``. This error indicates that an input sequence is longer than the model's maximum context window. Ensure your tokenized input does not exceed the model's ``max_seq_len`` (4096 tokens in this case). You can resolve this by truncating the input or splitting it into smaller chunks before passing it to the model. Note on reproducibility: The results in this guide are based on commit ``b4c98ac`` from the upstream ``__ repository. For the latest updates, you can use the main branch. {% endif %} {% endif %} {% endfor %} {% endfor %} .. rubric:: Benchmarking examples For examples of benchmarking commands, see ``__. .. 
_amd-pytorch-training-multinode-examples-v2510: Multi-node training ------------------- Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. See :ref:`rocm-for-ai-multi-node-setup-pyt-train-example` for example Slurm run commands. Pre-training ~~~~~~~~~~~~ Multi-node training with torchtitan is supported. The provided SLURM script is pre-configured for Llama 3 70B. To launch the training job on a SLURM cluster for Llama 3 70B, run the following commands from the MAD repository. .. code-block:: shell # In the MAD repository cd scripts/pytorch_train sbatch run_slurm_train.sh Fine-tuning ~~~~~~~~~~~ Multi-node training with torchtune is supported. The provided SLURM script is pre-configured for Llama 3.3 70B. To launch the training job on a SLURM cluster for Llama 3.3 70B, run the following commands from the MAD repository. .. code-block:: shell huggingface-cli login # Get access to HF Llama model space huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --local-dir ./models/Llama-3.3-70B-Instruct # Download the Llama 3.3 model locally # In the MAD repository cd scripts/pytorch_train sbatch Torchtune_Multinode.sh .. note:: Information regarding benchmark setup: * By default, Llama 3.3 70B is fine-tuned using ``alpaca_dataset``. * You can adjust the torchtune `YAML configuration file `__ if you're using a different model. * The number of nodes and other parameters can be tuned in the SLURM script ``Torchtune_Multinode.sh``. * Set the ``mounting_paths`` inside the SLURM script. Once the run is finished, you can find the log files in the ``result_torchtune/`` directory. Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch for ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm PyTorch training performance documentation. See :doc:`../pytorch-training` for the latest version. PyTorch is an open-source machine learning framework that is widely used for model training with GPU-optimized components for transformer-based models. The PyTorch for ROCm training Docker (``rocm/pytorch-training:v25.3``) image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. 
It includes the following software components to accelerate training workloads: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.0 | +--------------------------+--------------------------------+ | PyTorch | 2.7.0a0+git637433 | +--------------------------+--------------------------------+ | Python | 3.10 | +--------------------------+--------------------------------+ | Transformer Engine | 1.11 | +--------------------------+--------------------------------+ | Flash Attention | 3.0.0 | +--------------------------+--------------------------------+ | hipBLASLt | git258a2162 | +--------------------------+--------------------------------+ | Triton | 3.1 | +--------------------------+--------------------------------+ .. _amd-pytorch-training-model-support-v253: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI300X GPU. * Llama 3.1 8B * Llama 3.1 70B * FLUX.1-dev .. note:: Only these models are supported in the following steps. Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). System validation ================= If you have already validated your system settings, skip this step. Otherwise, complete the :ref:`system validation and optimization steps ` to set up your system before starting training. Disable NUMA auto-balancing --------------------------- Generally, application performance can benefit from disabling NUMA auto-balancing. However, it might be detrimental to performance with certain types of workloads. Run the command ``cat /proc/sys/kernel/numa_balancing`` to check your current NUMA (Non-Uniform Memory Access) settings. Output ``0`` indicates this setting is disabled. If there is no output or the output is ``1``, run the following command to disable NUMA auto-balancing. .. code-block:: shell sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing' See :ref:`System validation and optimization ` for more information. Environment setup ================= This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. Download the Docker image ------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/pytorch-training:v25.3 2. Run the Docker container. .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME:$HOME -v $HOME/.ssh:/root/.ssh --shm-size 64G --name training_env rocm/pytorch-training:v25.3 3. Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 4. In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch-train Prepare training datasets and dependencies ------------------------------------------ The following benchmarking examples may require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. Run the setup script to install libraries and datasets needed for benchmarking. .. 
code-block:: shell ./pytorch_benchmark_setup.sh ``pytorch_benchmark_setup.sh`` installs the following libraries: .. list-table:: :header-rows: 1 * - Library - Benchmark model - Reference * - ``accelerate`` - Llama 3.1 8B, FLUX - `Hugging Face Accelerate `_ * - ``datasets`` - Llama 3.1 8B, 70B, FLUX - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - Llama 3.1 70B - `TorchData `_ * - ``tomli`` - Llama 3.1 70B - `Tomli `_ * - ``tiktoken`` - Llama 3.1 70B - `tiktoken `_ * - ``blobfile`` - Llama 3.1 70B - `blobfile `_ * - ``tabulate`` - Llama 3.1 70B - `tabulate `_ * - ``wandb`` - Llama 3.1 70B - `Weights & Biases `_ * - ``sentencepiece`` - Llama 3.1 70B, FLUX - `SentencePiece `_ 0.2.0 * - ``tensorboard`` - Llama 3.1 70 B, FLUX - `TensorBoard `_ 2.18.0 * - ``csvkit`` - FLUX - `csvkit `_ 2.0.1 * - ``deepspeed`` - FLUX - `DeepSpeed `_ 0.16.2 * - ``diffusers`` - FLUX - `Hugging Face Diffusers `_ 0.31.0 * - ``GitPython`` - FLUX - `GitPython `_ 3.1.44 * - ``opencv-python-headless`` - FLUX - `opencv-python-headless `_ 4.10.0.84 * - ``peft`` - FLUX - `PEFT `_ 0.14.0 * - ``protobuf`` - FLUX - `Protocol Buffers `_ 5.29.2 * - ``pytest`` - FLUX - `PyTest `_ 8.3.4 * - ``python-dotenv`` - FLUX - `python-dotenv `_ 1.0.1 * - ``seaborn`` - FLUX - `Seaborn `_ 0.13.2 * - ``transformers`` - FLUX - `Transformers `_ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following models from Hugging Face: * `meta-llama/Llama-3.1-70B-Instruct `_ * `black-forest-labs/FLUX.1-dev `_ Along with the following datasets: * `WikiText `_ * `bghira/pseudo-camera-10k `_ Start training on AMD Instinct GPUs =========================================== The prebuilt PyTorch with ROCm training environment allows users to quickly validate system performance, conduct training benchmarks, and achieve superior performance for models like Llama 3.1 and Llama 2. This container should not be expected to provide generalized performance across all training workloads. You can expect the container to perform in the model configurations described in the following section, but other configurations are not validated by AMD. Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI300X Series GPUs with the AMD PyTorch training Docker image. Once your environment is set up, use the following commands and examples to start benchmarking. Pretraining ----------- To start the pretraining benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode -m $model_repo -p $datatype -s $sequence_length Options and available models ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. list-table:: :header-rows: 1 * - Name - Options - Description * - ``$training_mode`` - ``pretrain`` - Benchmark pretraining * - - ``finetune_fw`` - Benchmark full weight fine-tuning (Llama 3.1 70B with BF16) * - - ``finetune_lora`` - Benchmark LoRA fine-tuning (Llama 3.1 70B with BF16) * - ``$datatype`` - FP8 or BF16 - Only Llama 3.1 8B supports FP8 precision. * - ``$model_repo`` - Llama-3.1-8B - `Llama 3.1 8B `_ * - - Llama-3.1-70B - `Llama 3.1 70B `_ * - - Flux - `FLUX.1 [dev] `_ Fine-tuning ----------- To start the fine-tuning benchmark, use the following command. It will run the benchmarking example of Llama 2 70B with the WikiText dataset using the AMD fork of `torchtune `_. .. 
code-block:: shell ./pytorch_benchmark_report.sh -t {finetune_fw, finetune_lora} -p BF16 -m Llama-3.1-70B Benchmarking examples --------------------- Here are some examples of how to use the command. * Example 1: Llama 3.1 70B with BF16 precision with `torchtitan `_. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p BF16 -m Llama-3.1-70B -s 8192 * Example 2: Llama 3.1 8B with FP8 precision using Transformer Engine (TE) and Hugging Face Accelerator. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p FP8 -m Llama-3.1-70B -s 8192 * Example 3: FLUX.1-dev with BF16 precision with FluxBenchmark. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p BF16 -m Flux * Example 4: Torchtune full weight fine-tuning with Llama 3.1 70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_fw -p BF16 -m Llama-3.1-70B * Example 5: Torchtune LoRA fine-tuning with Llama 3.1 70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_lora -p BF16 -m Llama-3.1-70B Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch for ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm PyTorch training performance documentation. See :doc:`../pytorch-training` for the latest version. PyTorch is an open-source machine learning framework that is widely used for model training with GPU-optimized components for transformer-based models. The PyTorch for ROCm training Docker (``rocm/pytorch-training:v25.4``) image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. It includes the following software components to accelerate training workloads: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.0 | +--------------------------+--------------------------------+ | PyTorch | 2.7.0a0+git637433 | +--------------------------+--------------------------------+ | Python | 3.10 | +--------------------------+--------------------------------+ | Transformer Engine | 1.11 | +--------------------------+--------------------------------+ | Flash Attention | 3.0.0 | +--------------------------+--------------------------------+ | hipBLASLt | git258a2162 | +--------------------------+--------------------------------+ | Triton | 3.1 | +--------------------------+--------------------------------+ .. _amd-pytorch-training-model-support-v254: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. * Llama 3.1 8B * Llama 3.1 70B * Llama 2 70B * FLUX.1-dev .. note:: Only these models are supported in the following steps. Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). .. _amd-pytorch-training-performance-measurements-v254: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for training popular AI models. .. 
note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= If you have already validated your system settings, including NUMA auto-balancing, skip this step. Otherwise, complete the :ref:`system validation and optimization steps ` to set up your system before starting training. Environment setup ================= This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. Download the Docker image ------------------------- 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/pytorch-training:v25.4 2. Run the Docker container. .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME:$HOME -v $HOME/.ssh:/root/.ssh --shm-size 64G --name training_env rocm/pytorch-training:v25.4 3. Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 4. In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train Prepare training datasets and dependencies ------------------------------------------ The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh ``pytorch_benchmark_setup.sh`` installs the following libraries: .. 
list-table:: :header-rows: 1 * - Library - Benchmark model - Reference * - ``accelerate`` - Llama 3.1 8B, FLUX - `Hugging Face Accelerate `_ * - ``datasets`` - Llama 3.1 8B, 70B, FLUX - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - Llama 3.1 70B - `TorchData `_ * - ``tomli`` - Llama 3.1 70B - `Tomli `_ * - ``tiktoken`` - Llama 3.1 70B - `tiktoken `_ * - ``blobfile`` - Llama 3.1 70B - `blobfile `_ * - ``tabulate`` - Llama 3.1 70B - `tabulate `_ * - ``wandb`` - Llama 3.1 70B - `Weights & Biases `_ * - ``sentencepiece`` - Llama 3.1 70B, FLUX - `SentencePiece `_ 0.2.0 * - ``tensorboard`` - Llama 3.1 70 B, FLUX - `TensorBoard `_ 2.18.0 * - ``csvkit`` - FLUX - `csvkit `_ 2.0.1 * - ``deepspeed`` - FLUX - `DeepSpeed `_ 0.16.2 * - ``diffusers`` - FLUX - `Hugging Face Diffusers `_ 0.31.0 * - ``GitPython`` - FLUX - `GitPython `_ 3.1.44 * - ``opencv-python-headless`` - FLUX - `opencv-python-headless `_ 4.10.0.84 * - ``peft`` - FLUX - `PEFT `_ 0.14.0 * - ``protobuf`` - FLUX - `Protocol Buffers `_ 5.29.2 * - ``pytest`` - FLUX - `PyTest `_ 8.3.4 * - ``python-dotenv`` - FLUX - `python-dotenv `_ 1.0.1 * - ``seaborn`` - FLUX - `Seaborn `_ 0.13.2 * - ``transformers`` - FLUX - `Transformers `_ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following models from Hugging Face: * `meta-llama/Llama-3.1-70B-Instruct `_ * `black-forest-labs/FLUX.1-dev `_ Along with the following datasets: * `WikiText `_ * `UltraChat 200k `_ * `bghira/pseudo-camera-10k `_ Getting started =============== The prebuilt PyTorch with ROCm training environment allows users to quickly validate system performance, conduct training benchmarks, and achieve superior performance for models like Llama 3.1 and Llama 2. This container should not be expected to provide generalized performance across all training workloads. You can expect the container to perform in the model configurations described in the following section, but other configurations are not validated by AMD. Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on MI325X and MI300X GPUs with the AMD PyTorch training Docker image. Once your environment is set up, use the following commands and examples to start benchmarking. Pretraining ----------- To start the pretraining benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode -m $model_repo -p $datatype -s $sequence_length Options and available models ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. list-table:: :header-rows: 1 * - Name - Options - Description * - ``$training_mode`` - ``pretrain`` - Benchmark pretraining * - - ``finetune_fw`` - Benchmark full weight fine-tuning (Llama 3.1 70B with BF16) * - - ``finetune_lora`` - Benchmark LoRA fine-tuning (Llama 3.1 70B with BF16) * - - ``HF_finetune_lora`` - Benchmark LoRA fine-tuning with Hugging Face PEFT (Llama 2 70B with BF16) * - ``$datatype`` - ``FP8`` or ``BF16`` - Only Llama 3.1 8B supports FP8 precision. * - ``$model_repo`` - ``Llama-3.1-8B`` - `Llama 3.1 8B `_ * - - ``Llama-3.1-70B`` - `Llama 3.1 70B `_ * - - ``Llama-2-70B`` - `Llama 2 70B `_ * - - ``Flux`` - `FLUX.1 [dev] `_ * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. .. note:: Occasionally, downloading the Flux dataset might fail. 
In the event of this error, manually download it from Hugging Face at `black-forest-labs/FLUX.1-dev `_ and save it to `/workspace/FluxBenchmark`. This ensures that the test script can access the required dataset. Fine-tuning ----------- To start the fine-tuning benchmark, use the following command. It will run the benchmarking example of Llama 3.1 70B with the WikiText dataset using the AMD fork of `torchtune `_. .. code-block:: shell ./pytorch_benchmark_report.sh -t {finetune_fw, finetune_lora} -p BF16 -m Llama-3.1-70B Use the following command to run the benchmarking example of Llama 2 70B with the UltraChat 200k dataset using `Hugging Face PEFT `_. .. code-block:: shell ./pytorch_benchmark_report.sh -t HF_finetune_lora -p BF16 -m Llama-2-70B Benchmarking examples --------------------- Here are some examples of how to use the command. * Example 1: Llama 3.1 70B with BF16 precision with `torchtitan `_. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p BF16 -m Llama-3.1-70B -s 8192 * Example 2: Llama 3.1 8B with FP8 precision using Transformer Engine (TE) and Hugging Face Accelerator. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p FP8 -m Llama-3.1-70B -s 8192 * Example 3: FLUX.1-dev with BF16 precision with FluxBenchmark. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p BF16 -m Flux * Example 4: Torchtune full weight fine-tuning with Llama 3.1 70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_fw -p BF16 -m Llama-3.1-70B * Example 5: Torchtune LoRA fine-tuning with Llama 3.1 70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_lora -p BF16 -m Llama-3.1-70B * Example 6: Hugging Face PEFT LoRA fine-tuning with Llama 2 70B .. code-block:: shell ./pytorch_benchmark_report.sh -t HF_finetune_lora -p BF16 -m Llama-2-70B Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch for ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM performance benchmark documentation. See :doc:`../pytorch-training` for the latest version. PyTorch is an open-source machine learning framework that is widely used for model training with GPU-optimized components for transformer-based models. The `PyTorch for ROCm training Docker `_ (``rocm/pytorch-training:v25.5``) image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. 
It includes the following software components to accelerate training workloads: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.4 | +--------------------------+--------------------------------+ | PyTorch | 2.7.0a0+git637433 | +--------------------------+--------------------------------+ | Python | 3.10 | +--------------------------+--------------------------------+ | Transformer Engine | 1.12.0.dev0+25a33da | +--------------------------+--------------------------------+ | Flash Attention | 3.0.0 | +--------------------------+--------------------------------+ | hipBLASLt | git53b53bf | +--------------------------+--------------------------------+ | Triton | 3.2.0 | +--------------------------+--------------------------------+ .. _amd-pytorch-training-model-support-v255: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. * Llama 3.3 70B * Llama 3.1 8B * Llama 3.1 70B * Llama 2 70B * FLUX.1-dev .. note:: Only these models are supported in the following steps. Some models, such as Llama 3, require an external license agreement through a third party (for example, Meta). .. _amd-pytorch-training-performance-measurements-v255: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for training popular AI models. .. note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. Benchmarking ============ Once the setup is complete, choose between two options to start benchmarking: .. tab-set:: .. tab-item:: MAD-integrated benchmarking Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt For example, use this command to run the performance benchmark test on the Llama 3.1 8B model using one GPU with the float16 data type on the host machine. .. 
code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" python3 tools/run_models.py --tags pyt_train_llama-3.1-8b --keep-model-dir --live-output --timeout 28800 The available models for MAD-integrated benchmarking are: * ``pyt_train_llama-3.3-70b`` * ``pyt_train_llama-3.1-8b`` * ``pyt_train_llama-3.1-70b`` * ``pyt_train_flux`` MAD launches a Docker container with the name ``container_ci-pyt_train_llama-3.1-8b``, for example. The latency and throughput reports of the model are collected in the following path: ``~/MAD/perf.csv``. .. tab-item:: Standalone benchmarking .. rubric:: Download the Docker image and required packages Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull rocm/pytorch-training:v25.5 Run the Docker container. .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME:$HOME -v $HOME/.ssh:/root/.ssh --shm-size 64G --name training_env rocm/pytorch-training:v25.5 Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train .. rubric:: Prepare training datasets and dependencies The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh ``pytorch_benchmark_setup.sh`` installs the following libraries: .. list-table:: :header-rows: 1 * - Library - Benchmark model - Reference * - ``accelerate`` - Llama 3.1 8B, FLUX - `Hugging Face Accelerate `_ * - ``datasets`` - Llama 3.1 8B, 70B, FLUX - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - Llama 3.1 70B - `TorchData `_ * - ``tomli`` - Llama 3.1 70B - `Tomli `_ * - ``tiktoken`` - Llama 3.1 70B - `tiktoken `_ * - ``blobfile`` - Llama 3.1 70B - `blobfile `_ * - ``tabulate`` - Llama 3.1 70B - `tabulate `_ * - ``wandb`` - Llama 3.1 70B - `Weights & Biases `_ * - ``sentencepiece`` - Llama 3.1 70B, FLUX - `SentencePiece `_ 0.2.0 * - ``tensorboard`` - Llama 3.1 70 B, FLUX - `TensorBoard `_ 2.18.0 * - ``csvkit`` - FLUX - `csvkit `_ 2.0.1 * - ``deepspeed`` - FLUX - `DeepSpeed `_ 0.16.2 * - ``diffusers`` - FLUX - `Hugging Face Diffusers `_ 0.31.0 * - ``GitPython`` - FLUX - `GitPython `_ 3.1.44 * - ``opencv-python-headless`` - FLUX - `opencv-python-headless `_ 4.10.0.84 * - ``peft`` - FLUX - `PEFT `_ 0.14.0 * - ``protobuf`` - FLUX - `Protocol Buffers `_ 5.29.2 * - ``pytest`` - FLUX - `PyTest `_ 8.3.4 * - ``python-dotenv`` - FLUX - `python-dotenv `_ 1.0.1 * - ``seaborn`` - FLUX - `Seaborn `_ 0.13.2 * - ``transformers`` - FLUX - `Transformers `_ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following models from Hugging Face: * `meta-llama/Llama-3.1-70B-Instruct `_ * `black-forest-labs/FLUX.1-dev `_ Along with the following datasets: * `WikiText `_ * `UltraChat 200k `_ * `bghira/pseudo-camera-10k `_ .. 
rubric:: Pretraining To start the pretraining benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode -m $model_repo -p $datatype -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description * - ``$training_mode`` - ``pretrain`` - Benchmark pretraining * - - ``finetune_fw`` - Benchmark full weight fine-tuning (Llama 3.1 70B with BF16) * - - ``finetune_lora`` - Benchmark LoRA fine-tuning (Llama 3.1 70B with BF16) * - - ``HF_finetune_lora`` - Benchmark LoRA fine-tuning with Hugging Face PEFT (Llama 2 70B with BF16) * - ``$datatype`` - ``FP8`` or ``BF16`` - Only Llama 3.1 8B supports FP8 precision. * - ``$model_repo`` - ``Llama-3.3-70B`` - `Llama 3.3 70B `_ * - - ``Llama-3.1-8B`` - `Llama 3.1 8B `_ * - - ``Llama-3.1-70B`` - `Llama 3.1 70B `_ * - - ``Llama-2-70B`` - `Llama 2 70B `_ * - - ``Flux`` - `FLUX.1 [dev] `_ * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. .. note:: Occasionally, downloading the Flux dataset might fail. In the event of this error, manually download it from Hugging Face at `black-forest-labs/FLUX.1-dev `_ and save it to `/workspace/FluxBenchmark`. This ensures that the test script can access the required dataset. .. rubric:: Fine-tuning To start the fine-tuning benchmark, use the following command. It will run the benchmarking example of Llama 3.1 70B with the WikiText dataset using the AMD fork of `torchtune `_. .. code-block:: shell ./pytorch_benchmark_report.sh -t {finetune_fw, finetune_lora} -p BF16 -m Llama-3.1-70B Use the following command to run the benchmarking example of Llama 2 70B with the UltraChat 200k dataset using `Hugging Face PEFT `_. .. code-block:: shell ./pytorch_benchmark_report.sh -t HF_finetune_lora -p BF16 -m Llama-2-70B .. rubric:: Benchmarking examples Here are some example commands to get started pretraining and fine-tuning with various model configurations. * Example 1: Llama 3.1 70B with BF16 precision with `torchtitan `_. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p BF16 -m Llama-3.1-70B -s 8192 * Example 2: Llama 3.1 8B with FP8 precision using Transformer Engine (TE) and Hugging Face Accelerator. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p FP8 -m Llama-3.1-70B -s 8192 * Example 3: FLUX.1-dev with BF16 precision with FluxBenchmark. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -p BF16 -m Flux * Example 4: Torchtune full weight fine-tuning with Llama 3.1 70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_fw -p BF16 -m Llama-3.1-70B * Example 5: Torchtune LoRA fine-tuning with Llama 3.1 70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_lora -p BF16 -m Llama-3.1-70B * Example 6: Torchtune full weight fine-tuning with Llama-3.3-70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_fw -p BF16 -m Llama-3.3-70B * Example 7: Torchtune LoRA fine-tuning with Llama-3.3-70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_lora -p BF16 -m Llama-3.3-70B * Example 8: Torchtune QLoRA fine-tuning with Llama-3.3-70B .. code-block:: shell ./pytorch_benchmark_report.sh -t finetune_qlora -p BF16 -m Llama-3.3-70B * Example 9: Hugging Face PEFT LoRA fine-tuning with Llama 2 70B .. 
code-block:: shell ./pytorch_benchmark_report.sh -t HF_finetune_lora -p BF16 -m Llama-2-70B Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch for ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm vLLM performance benchmark documentation. See :doc:`../pytorch-training` for the latest version. PyTorch is an open-source machine learning framework that is widely used for model training with GPU-optimized components for transformer-based models. The `PyTorch for ROCm training Docker `_ (``rocm/pytorch-training:v25.6``) image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. It includes the following software components to accelerate training workloads: +--------------------------+--------------------------------+ | Software component | Version | +==========================+================================+ | ROCm | 6.3.4 | +--------------------------+--------------------------------+ | PyTorch | 2.8.0a0+git7d205b2 | +--------------------------+--------------------------------+ | Python | 3.10.17 | +--------------------------+--------------------------------+ | Transformer Engine | 1.14.0+2f85f5f2 | +--------------------------+--------------------------------+ | Flash Attention | 3.0.0.post1 | +--------------------------+--------------------------------+ | hipBLASLt | 0.15.0-8c6919d | +--------------------------+--------------------------------+ | Triton | 3.3.0 | +--------------------------+--------------------------------+ .. _amd-pytorch-training-model-support-v256: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.6-benchmark-models.yaml {% set unified_docker = data.unified_docker.latest %} {% set model_groups = data.model_groups %} .. raw:: html
(Model selector: a *Workload* row listing each model group and a *Model* row listing the models in each group, generated from the v25.6 benchmark-models YAML.)
.. note:: Some models require an external license agreement through a third party (for example, Meta). .. _amd-pytorch-training-performance-measurements-v256: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for training popular AI models. .. note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t validate configurations and run conditions outside those described. Benchmarking ============ Once the setup is complete, choose between two options to start benchmarking: .. tab-set:: .. tab-item:: MAD-integrated benchmarking Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} For example, use this command to run the performance benchmark test on the {{ model.model }} model using one GPU with the {{ model.precision }} data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``, for example. The latency and throughput reports of the model are collected in the following path: ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Standalone benchmarking .. rubric:: Download the Docker image and required packages Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} Run the Docker container. .. code-block:: shell docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $HOME:$HOME -v $HOME/.ssh:/root/.ssh --shm-size 64G --name training_env {{ unified_docker.pull_tag }} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train .. 
rubric:: Prepare training datasets and dependencies The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh .. container:: model-doc pyt_train_llama-3.1-8b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 8B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 .. container:: model-doc pyt_train_llama-3.1-70b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 70B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - `TorchData `_ * - ``tomli`` - `Tomli `_ * - ``tiktoken`` - `tiktoken `_ * - ``blobfile`` - `blobfile `_ * - ``tabulate`` - `tabulate `_ * - ``wandb`` - `Weights & Biases `_ * - ``sentencepiece`` - `SentencePiece `_ 0.2.0 * - ``tensorboard`` - `TensorBoard `_ 2.18.0 .. container:: model-doc pyt_train_flux ``pytorch_benchmark_setup.sh`` installs the following libraries for FLUX: .. list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 * - ``sentencepiece`` - `SentencePiece `_ 0.2.0 * - ``tensorboard`` - `TensorBoard `_ 2.18.0 * - ``csvkit`` - `csvkit `_ 2.0.1 * - ``deepspeed`` - `DeepSpeed `_ 0.16.2 * - ``diffusers`` - `Hugging Face Diffusers `_ 0.31.0 * - ``GitPython`` - `GitPython `_ 3.1.44 * - ``opencv-python-headless`` - `opencv-python-headless `_ 4.10.0.84 * - ``peft`` - `PEFT `_ 0.14.0 * - ``protobuf`` - `Protocol Buffers `_ 5.29.2 * - ``pytest`` - `PyTest `_ 8.3.4 * - ``python-dotenv`` - `python-dotenv `_ 1.0.1 * - ``seaborn`` - `Seaborn `_ 0.13.2 * - ``transformers`` - `Transformers `_ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following datasets from Hugging Face: * `bghira/pseudo-camera-10k `_ {% for model_group in model_groups %} {% for model in model_group.models %} {% if model_group.tag == "pre-training" and model.mad_tag in ["pyt_train_llama-3.1-8b", "pyt_train_llama-3.1-70b", "pyt_train_flux"] %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Pretraining To start the pre-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -m {{ model.model_repo }} -p $datatype -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description {% if model.mad_tag == "pyt_train_llama-3.1-8b" %} * - ``$datatype`` - ``BF16`` or ``FP8`` - Only Llama 3.1 8B supports FP8 precision. {% else %} * - ``$datatype`` - ``BF16`` - Only Llama 3.1 8B supports FP8 precision. {% endif %} * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% if model.mad_tag == "pyt_train_flux" %} .. container:: model-doc {{ model.mad_tag }} .. note:: Occasionally, downloading the Flux dataset might fail. In the event of this error, manually download it from Hugging Face at `black-forest-labs/FLUX.1-dev `_ and save it to `/workspace/FluxBenchmark`. This ensures that the test script can access the required dataset. 
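As a minimal sketch (assuming your ``HF_TOKEN`` has already been granted access to the gated repository and that the default ``/workspace/FluxBenchmark`` layout applies), the dataset can be fetched manually with the Hugging Face CLI and placed where the benchmark script expects it:

.. code-block:: shell

   # Sketch only: authenticate with the previously exported token, then
   # download the gated FLUX.1-dev repository into the expected directory.
   huggingface-cli login --token $HF_TOKEN
   huggingface-cli download black-forest-labs/FLUX.1-dev --local-dir /workspace/FluxBenchmark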
{% endif %} {% endif %} {% if model_group.tag == "fine-tuning" %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Fine-tuning To start the fine-tuning benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode -m {{ model.model_repo }} -p BF16 -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description * - ``$training_mode`` - ``finetune_fw`` - Full weight fine-tuning (BF16 supported) * - - ``finetune_lora`` - LoRA fine-tuning (BF16 supported) * - - ``finetune_qlora`` - QLoRA fine-tuning (BF16 supported) * - - ``HF_finetune_lora`` - LoRA fine-tuning with Hugging Face PEFT * - ``$datatype`` - ``BF16`` - All models support BF16. * - ``$sequence_length`` - Between 2048 and 16384. - Sequence length for the language model. .. note:: {{ model.model }} currently supports the following fine-tuning methods: {% for method in model.training_modes %} * ``{{ method }}`` {% endfor %} {% if model.training_modes|length < 4 %} The upstream `torchtune `_ repository does not currently provide YAML configuration files for other combinations of model to fine-tuning method However, you can still configure your own YAML files to enable support for fine-tuning methods not listed here by following existing patterns in the ``/workspace/torchtune/recipes/configs`` directory. {% endif %} {% endif %} {% endfor %} {% endfor %} .. rubric:: Benchmarking examples For examples of benchmarking commands, see ``__. Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch for ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm PyTorch training performance benchmark documentation. See :doc:`../pytorch-training` for the latest version. PyTorch is an open-source machine learning framework that is widely used for model training with GPU-optimized components for transformer-based models. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.7-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} The `PyTorch for ROCm training Docker <{{ docker.docker_hub_url }}>`__ (``{{ docker.pull_tag }}``) image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. It includes the following software components to accelerate training workloads: .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. 
_amd-pytorch-training-model-support-v257: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.7-benchmark-models.yaml {% set unified_docker = data.dockers[0] %} {% set model_groups = data.model_groups %} .. raw:: html
(Model selector: a *Model* row listing each model group and a *Variant* row listing the models in each group, generated from the v25.7 benchmark-models YAML.)
.. _amd-pytorch-training-supported-training-modes-v257: The following table lists supported training modes per model. .. dropdown:: Supported training modes .. list-table:: :header-rows: 1 * - Model - Supported training modes {% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} * - {{ model.model }} - ``{{ model.training_modes | join('``, ``') }}`` {% endfor %} {% endfor %} .. note:: Some model and fine-tuning combinations are not listed. This is because the `upstream torchtune repository `__ doesn't provide default YAML configurations for them. For advanced usage, you can create a custom configuration to enable unlisted fine-tuning methods by using an existing file in the ``/workspace/torchtune/recipes/configs`` directory as a template. .. _amd-pytorch-training-performance-measurements-v257: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for training popular AI models. .. note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. Run training ============ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.7-benchmark-models.yaml {% set unified_docker = data.dockers[0] %} {% set model_groups = data.model_groups %} Once the setup is complete, choose between two options to start benchmarking training: .. tab-set:: .. tab-item:: MAD-integrated benchmarking 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Standalone benchmarking .. rubric:: Download the Docker image and required packages 1. 
Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} 2. Run the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ unified_docker.pull_tag }} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 3. In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train .. rubric:: Prepare training datasets and dependencies 1. The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token 2. Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh .. container:: model-doc pyt_train_llama-3.1-8b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 8B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 .. container:: model-doc pyt_train_llama-3.1-70b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 70B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - `TorchData `_ * - ``tomli`` - `Tomli `_ * - ``tiktoken`` - `tiktoken `_ * - ``blobfile`` - `blobfile `_ * - ``tabulate`` - `tabulate `_ * - ``wandb`` - `Weights & Biases `_ * - ``sentencepiece`` - `SentencePiece `_ 0.2.0 * - ``tensorboard`` - `TensorBoard `_ 2.18.0 .. container:: model-doc pyt_train_flux ``pytorch_benchmark_setup.sh`` installs the following libraries for FLUX: .. list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 * - ``sentencepiece`` - `SentencePiece `_ 0.2.0 * - ``tensorboard`` - `TensorBoard `_ 2.18.0 * - ``csvkit`` - `csvkit `_ 2.0.1 * - ``deepspeed`` - `DeepSpeed `_ 0.16.2 * - ``diffusers`` - `Hugging Face Diffusers `_ 0.31.0 * - ``GitPython`` - `GitPython `_ 3.1.44 * - ``opencv-python-headless`` - `opencv-python-headless `_ 4.10.0.84 * - ``peft`` - `PEFT `_ 0.14.0 * - ``protobuf`` - `Protocol Buffers `_ 5.29.2 * - ``pytest`` - `PyTest `_ 8.3.4 * - ``python-dotenv`` - `python-dotenv `_ 1.0.1 * - ``seaborn`` - `Seaborn `_ 0.13.2 * - ``transformers`` - `Transformers `_ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following datasets from Hugging Face: * `bghira/pseudo-camera-10k `_ {% for model_group in model_groups %} {% for model in model_group.models %} {% set training_modes = model.training_modes %} {% set training_mode_descs = { "pretrain": "Benchmark pre-training.", "HF_pretrain": "Llama 3.1 8B pre-training with FP8 precision." } %} {% set available_modes = training_modes | select("in", ["pretrain", "HF_pretrain"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. 
rubric:: Pre-training To start the pre-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t {% if available_modes | length == 1 %}{{ available_modes[0] }}{% else %}$training_mode{% endif %} \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length {% if model.mad_tag == "pyt_train_flux" %} .. container:: model-doc {{ model.mad_tag }} .. note:: Currently, FLUX models are not supported out-of-the-box on {{ unified_docker.pull_tag }}. To use FLUX, refer to the previous version of the ``pytorch-training`` Docker: :doc:`pytorch-training-v25.6` Occasionally, downloading the Flux dataset might fail. In the event of this error, manually download it from Hugging Face at `black-forest-labs/FLUX.1-dev `_ and save it to `/workspace/FluxBenchmark`. This ensures that the test script can access the required dataset. {% endif %} .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% endif %} {% set training_mode_descs = { "finetune_fw": "Full weight fine-tuning (BF16 and FP8 supported).", "finetune_lora": "LoRA fine-tuning (BF16 supported).", "finetune_qlora": "QLoRA fine-tuning (BF16 supported).", "HF_finetune_lora": "LoRA fine-tuning with Hugging Face PEFT.", } %} {% set available_modes = training_modes | select("in", ["finetune_fw", "finetune_lora", "finetune_qlora", "HF_finetune_lora"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Fine-tuning To start the fine-tuning benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. See :ref:`supported training modes `. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if "finetune_fw" in available_modes %} or ``FP8``{% endif %} - All models support BF16.{% if "finetune_fw" in available_modes %} FP8 is only available for full weight fine-tuning.{% endif %} * - ``$sequence_length`` - Between 2048 and 16384. - Sequence length for the language model. {% if model.mad_tag in ["pyt_train_llama3.2-vision-11b", "pyt_train_llama-3.2-vision-90b"] %} .. note:: For LoRA and QLoRA support with vision models (Llama 3.2 11B and 90B), use the following torchtune commit for compatibility: .. code-block:: shell git checkout 48192e23188b1fc524dd6d127725ceb2348e7f0e {% elif model.mad_tag in ["pyt_train_llama-2-7b", "pyt_train_llama-2-13b", "pyt_train_llama-2-70b"] %} .. note:: You might encounter the following error with Llama 2: ``ValueError: seq_len (16384) of input tensor should be smaller than max_seq_len (4096)``. This error indicates that an input sequence is longer than the model's maximum context window. 
Ensure your tokenized input does not exceed the model's ``max_seq_len`` (4096 tokens in this case). You can resolve this by truncating the input or splitting it into smaller chunks before passing it to the model. Note on reproducibility: The results in this guide are based on commit ``b4c98ac`` from the upstream ``__ repository. For the latest updates, you can use the main branch. {% endif %} {% endif %} {% endfor %} {% endfor %} .. rubric:: Benchmarking examples For examples of benchmarking commands, see ``__. Multi-node training ------------------- Pre-training ~~~~~~~~~~~~ Multi-node training with torchtitan is supported. The provided SLURM script is pre-configured for Llama 3 70B. To launch the training job on a SLURM cluster for Llama 3 70B, run the following commands from the MAD repository. .. code-block:: shell # In the MAD repository cd scripts/pytorch_train sbatch run_slurm_train.sh Fine-tuning ~~~~~~~~~~~ Multi-node training with torchtune is supported. The provided SLURM script is pre-configured for Llama 3.3 70B. To launch the training job on a SLURM cluster for Llama 3.3 70B, run the following commands from the MAD repository. .. code-block:: shell huggingface-cli login # Get access to HF Llama model space huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --local-dir ./models/Llama-3.3-70B-Instruct # Download the Llama 3.3 model locally # In the MAD repository cd scripts/pytorch_train sbatch Torchtune_Multinode.sh .. note:: Information regarding benchmark setup: * By default, Llama 3.3 70B is fine-tuned using ``alpaca_dataset``. * You can adjust the torchtune `YAML configuration file `__ if you're using a different model. * The number of nodes and other parameters can be tuned in the SLURM script ``Torchtune_Multinode.sh``. * Set the ``mounting_paths`` inside the SLURM script. Once the run is finished, you can find the log files in the ``result_torchtune/`` directory. Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch on ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm PyTorch training performance benchmark documentation. See :doc:`../pytorch-training` for the latest version. PyTorch is an open-source machine learning framework that is widely used for model training with GPU-optimized components for transformer-based models. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.8-benchmark-models.yaml {% set dockers = data.dockers %} {% set docker = dockers[0] %} The `PyTorch for ROCm training Docker <{{ docker.docker_hub_url }}>`__ (``{{ docker.pull_tag }}``) image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. 
It includes the following software components to accelerate training workloads: .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-pytorch-training-model-support: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.8-benchmark-models.yaml {% set unified_docker = data.dockers[0] %} {% set model_groups = data.model_groups %} .. raw:: html
   <!-- Model and variant selector buttons, generated from the model_groups data -->
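If you want to verify that a pulled image matches the software component versions listed above, you can query a few of them from inside the running container (for example, the ``training_env`` container started later in this guide). This is a minimal sketch using standard Python and pip commands; the package names in the filter are illustrative, not an exhaustive list.

.. code-block:: shell

   # Print the PyTorch version and the HIP/ROCm version it was built against
   python -c "import torch; print(torch.__version__, torch.version.hip)"

   # Spot-check a few of the installed training libraries
   pip list | grep -Ei "^(torch|transformers|accelerate|datasets)"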
.. _amd-pytorch-training-supported-training-modes: The following table lists supported training modes per model. .. dropdown:: Supported training modes .. list-table:: :header-rows: 1 * - Model - Supported training modes {% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if model.training_modes %} * - {{ model.model }} - ``{{ model.training_modes | join('``, ``') }}`` {% endif %} {% endfor %} {% endfor %} .. note:: Some model and fine-tuning combinations are not listed. This is because the `upstream torchtune repository `__ doesn't provide default YAML configurations for them. For advanced usage, you can create a custom configuration to enable unlisted fine-tuning methods by using an existing file in the ``/workspace/torchtune/recipes/configs`` directory as a template. .. _amd-pytorch-training-performance-measurements: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for training popular AI models. .. note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. Run training ============ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.8-benchmark-models.yaml {% set unified_docker = data.dockers[0] %} {% set model_groups = data.model_groups %} Once the setup is complete, choose between two options to start benchmarking training: .. tab-set:: .. tab-item:: MAD-integrated benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run command is tailored to {{ model.model }}. See :ref:`amd-pytorch-training-model-support` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. 
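While the benchmark is in progress, you can monitor the container that MAD launched using standard Docker commands. This is a minimal sketch; the container name follows the ``container_ci-`` prefix described above.

.. code-block:: shell

   # List the benchmark container started by MAD
   docker ps --filter "name=container_ci-"

   # Follow its logs while the benchmark runs
   docker logs -f container_ci-{{ model.mad_tag }}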
The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Standalone benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following commands are tailored to {{ model.model }}. See :ref:`amd-pytorch-training-model-support` to switch to another available model. {% endfor %} {% endfor %} .. rubric:: Download the Docker image and required packages 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ unified_docker.pull_tag }} 2. Run the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ unified_docker.pull_tag }} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 3. In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train .. rubric:: Prepare training datasets and dependencies 1. The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token 2. Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh .. container:: model-doc pyt_train_llama-3.1-8b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 8B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 .. container:: model-doc pyt_train_llama-3.1-70b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 70B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - `TorchData `__ * - ``tomli`` - `Tomli `__ * - ``tiktoken`` - `tiktoken `__ * - ``blobfile`` - `blobfile `__ * - ``tabulate`` - `tabulate `__ * - ``wandb`` - `Weights & Biases `__ * - ``sentencepiece`` - `SentencePiece `__ 0.2.0 * - ``tensorboard`` - `TensorBoard `__ 2.18.0 .. container:: model-doc pyt_train_flux ``pytorch_benchmark_setup.sh`` installs the following libraries for FLUX: .. 
list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `__ 3.2.0 * - ``sentencepiece`` - `SentencePiece `__ 0.2.0 * - ``tensorboard`` - `TensorBoard `__ 2.18.0 * - ``csvkit`` - `csvkit `__ 2.0.1 * - ``deepspeed`` - `DeepSpeed `__ 0.16.2 * - ``diffusers`` - `Hugging Face Diffusers `__ 0.31.0 * - ``GitPython`` - `GitPython `__ 3.1.44 * - ``opencv-python-headless`` - `opencv-python-headless `__ 4.10.0.84 * - ``peft`` - `PEFT `__ 0.14.0 * - ``protobuf`` - `Protocol Buffers `__ 5.29.2 * - ``pytest`` - `PyTest `__ 8.3.4 * - ``python-dotenv`` - `python-dotenv `__ 1.0.1 * - ``seaborn`` - `Seaborn `__ 0.13.2 * - ``transformers`` - `Transformers `__ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following datasets from Hugging Face: * `bghira/pseudo-camera-10k `__ {% for model_group in model_groups %} {% for model in model_group.models %} {% set training_modes = model.training_modes %} {% set training_mode_descs = { "pretrain": "Benchmark pre-training.", "HF_pretrain": "Llama 3.1 8B pre-training with FP8 precision." } %} {% set available_modes = training_modes | select("in", ["pretrain", "HF_pretrain"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Pre-training To start the pre-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t {% if available_modes | length == 1 %}{{ available_modes[0] }}{% else %}$training_mode{% endif %} \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length {% if model.mad_tag == "pyt_train_flux" %} .. container:: model-doc {{ model.mad_tag }} .. note:: Currently, FLUX models are not supported out-of-the-box on {{ unified_docker.pull_tag }}. To use FLUX, refer to ``rocm/pytorch-training`` Docker: :doc:`pytorch-training-v25.6` Occasionally, downloading the Flux dataset might fail. In the event of this error, manually download it from Hugging Face at `black-forest-labs/FLUX.1-dev `_ and save it to `/workspace/FluxBenchmark`. This ensures that the test script can access the required dataset. {% endif %} .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% endif %} {% set training_mode_descs = { "finetune_fw": "Full weight fine-tuning (BF16 and FP8 supported).", "finetune_lora": "LoRA fine-tuning (BF16 supported).", "finetune_qlora": "QLoRA fine-tuning (BF16 supported).", "HF_finetune_lora": "LoRA fine-tuning with Hugging Face PEFT.", } %} {% set available_modes = training_modes | select("in", ["finetune_fw", "finetune_lora", "finetune_qlora", "HF_finetune_lora"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Fine-tuning To start the fine-tuning benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. See :ref:`supported training modes `. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. 
list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if "finetune_fw" in available_modes %} or ``FP8``{% endif %} - All models support BF16.{% if "finetune_fw" in available_modes %} FP8 is only available for full weight fine-tuning.{% endif %} * - ``$sequence_length`` - Between 2048 and 16384. - Sequence length for the language model. {% if model.mad_tag in ["pyt_train_llama3.2-vision-11b", "pyt_train_llama-3.2-vision-90b"] %} .. note:: For LoRA and QLoRA support with vision models (Llama 3.2 11B and 90B), use the following torchtune commit for compatibility: .. code-block:: shell git checkout 48192e23188b1fc524dd6d127725ceb2348e7f0e {% elif model.mad_tag in ["pyt_train_llama-2-7b", "pyt_train_llama-2-13b", "pyt_train_llama-2-70b"] %} .. note:: You might encounter the following error with Llama 2: ``ValueError: seq_len (16384) of input tensor should be smaller than max_seq_len (4096)``. This error indicates that an input sequence is longer than the model's maximum context window. Ensure your tokenized input does not exceed the model's ``max_seq_len`` (4096 tokens in this case). You can resolve this by truncating the input or splitting it into smaller chunks before passing it to the model. Note on reproducibility: The results in this guide are based on commit ``b4c98ac`` from the upstream ``__ repository. For the latest updates, you can use the main branch. {% endif %} {% endif %} {% endfor %} {% endfor %} .. rubric:: Benchmarking examples For examples of benchmarking commands, see ``__. .. _amd-pytorch-training-multinode-examples: Multi-node training ------------------- Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. See :ref:`rocm-for-ai-multi-node-setup-pyt-train-example` for example Slurm run commands. Pre-training ~~~~~~~~~~~~ Multi-node training with torchtitan is supported. The provided SLURM script is pre-configured for Llama 3 70B. To launch the training job on a SLURM cluster for Llama 3 70B, run the following commands from the MAD repository. .. code-block:: shell # In the MAD repository cd scripts/pytorch_train sbatch run_slurm_train.sh Fine-tuning ~~~~~~~~~~~ Multi-node training with torchtune is supported. The provided SLURM script is pre-configured for Llama 3.3 70B. To launch the training job on a SLURM cluster for Llama 3.3 70B, run the following commands from the MAD repository. .. code-block:: shell huggingface-cli login # Get access to HF Llama model space huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --local-dir ./models/Llama-3.3-70B-Instruct # Download the Llama 3.3 model locally # In the MAD repository cd scripts/pytorch_train sbatch Torchtune_Multinode.sh .. note:: Information regarding benchmark setup: * By default, Llama 3.3 70B is fine-tuned using ``alpaca_dataset``. * You can adjust the torchtune `YAML configuration file `__ if you're using a different model. * The number of nodes and other parameters can be tuned in the SLURM script ``Torchtune_Multinode.sh``. * Set the ``mounting_paths`` inside the SLURM script. Once the run is finished, you can find the log files in the ``result_torchtune/`` directory. Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. 
- To learn more about system settings and management practices to configure your system for AMD Instinct MI300X series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch on ROCm ************************************** .. caution:: This documentation does not reflect the latest version of ROCm PyTorch training performance benchmark documentation. See :doc:`../pytorch-training` for the latest version. .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/pytorch-training `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including torchtitan and :doc:`Megatron-LM <../primus-megatron>`. See :doc:`../primus-pytorch` for details. PyTorch is an open-source machine learning framework that is widely used for model training with GPU-optimized components for transformer-based models. The PyTorch for ROCm training Docker image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. It includes the following software components to accelerate training workloads: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} {% endfor %} .. _amd-pytorch-training-model-support-v259: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI355X, MI350X, MI325X, and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.9-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
   <!-- Model and variant selector buttons, generated from the model_groups data -->
.. _amd-pytorch-training-supported-training-modes-v259: The following table lists supported training modes per model. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.9-benchmark-models.yaml {% set model_groups = data.model_groups %} .. dropdown:: Supported training modes .. list-table:: :header-rows: 1 * - Model - Supported training modes {% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if model.training_modes %} * - {{ model.model }} - ``{{ model.training_modes | join('``, ``') }}`` {% endif %} {% endfor %} {% endfor %} .. note:: Some model and fine-tuning combinations are not listed. This is because the `upstream torchtune repository `__ doesn't provide default YAML configurations for them. For advanced usage, you can create a custom configuration to enable unlisted fine-tuning methods by using an existing file in the ``/workspace/torchtune/recipes/configs`` directory as a template. .. _amd-pytorch-training-performance-measurements-v259: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for training popular AI models. .. note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. Run training ============ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/previous-versions/pytorch-training-v25.9-benchmark-models.yaml {% set dockers = data.dockers %} {% set model_groups = data.model_groups %} Once the setup is complete, choose between two options to start benchmarking training: .. tab-set:: .. tab-item:: MAD-integrated benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run command is tailored to {{ model.model }}. See :ref:`amd-pytorch-training-model-support-v259` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. 
code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Standalone benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following commands are tailored to {{ model.model }}. See :ref:`amd-pytorch-training-model-support-v259` to switch to another available model. {% endfor %} {% endfor %} .. rubric:: Download the Docker image and required packages 1. Use the following command to pull the Docker image from Docker Hub. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker pull {{ docker.pull_tag }} {% endfor %} 2. Launch the Docker container. .. tab-set:: {% for supported_gpus, docker in dockers.items() %} .. tab-item:: {{ supported_gpus }} :sync: {{ supported_gpus }} .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} {% endfor %} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 3. In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train .. rubric:: Prepare training datasets and dependencies 1. The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token 2. Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh .. container:: model-doc pyt_train_llama-3.1-8b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 8B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 .. container:: model-doc pyt_train_llama-3.1-70b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 70B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - `TorchData `__ * - ``tomli`` - `Tomli `__ * - ``tiktoken`` - `tiktoken `__ * - ``blobfile`` - `blobfile `__ * - ``tabulate`` - `tabulate `__ * - ``wandb`` - `Weights & Biases `__ * - ``sentencepiece`` - `SentencePiece `__ 0.2.0 * - ``tensorboard`` - `TensorBoard `__ 2.18.0 .. container:: model-doc pyt_train_flux ``pytorch_benchmark_setup.sh`` installs the following libraries for FLUX: .. 
list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `__ 3.2.0 * - ``sentencepiece`` - `SentencePiece `__ 0.2.0 * - ``tensorboard`` - `TensorBoard `__ 2.18.0 * - ``csvkit`` - `csvkit `__ 2.0.1 * - ``deepspeed`` - `DeepSpeed `__ 0.16.2 * - ``diffusers`` - `Hugging Face Diffusers `__ 0.31.0 * - ``GitPython`` - `GitPython `__ 3.1.44 * - ``opencv-python-headless`` - `opencv-python-headless `__ 4.10.0.84 * - ``peft`` - `PEFT `__ 0.14.0 * - ``protobuf`` - `Protocol Buffers `__ 5.29.2 * - ``pytest`` - `PyTest `__ 8.3.4 * - ``python-dotenv`` - `python-dotenv `__ 1.0.1 * - ``seaborn`` - `Seaborn `__ 0.13.2 * - ``transformers`` - `Transformers `__ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following datasets from Hugging Face: * `frank-chieng/chinese_architecture_siheyuan `__ {% for model_group in model_groups %} {% for model in model_group.models %} {% set training_modes = model.training_modes %} {% set training_mode_descs = { "pretrain": "Benchmark pre-training.", "HF_pretrain": "Llama 3.1 8B pre-training with FP8 precision." } %} {% set available_modes = training_modes | select("in", ["pretrain", "HF_pretrain"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Pre-training To start the pre-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t {% if available_modes | length == 1 %}{{ available_modes[0] }}{% else %}$training_mode{% endif %} \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length {% if model.mad_tag == "pyt_train_flux" %} .. container:: model-doc {{ model.mad_tag }} .. note:: Currently, FLUX models are not supported out-of-the-box on this Docker. To use FLUX, refer to ``rocm/pytorch-training`` Docker: :doc:`previous-versions/pytorch-training-v25.6` Occasionally, downloading the Flux dataset might fail. In the event of this error, manually download it from Hugging Face at `black-forest-labs/FLUX.1-dev `_ and save it to `/workspace/FluxBenchmark`. This ensures that the test script can access the required dataset. {% endif %} .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% endif %} {% set training_modes = model.training_modes %} {% set training_mode_descs = { "posttrain": "Benchmark post-training.", } %} {% set available_modes = training_modes | select("in", ["posttrain"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Post-training To start the post-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t {% if available_modes | length == 1 %}{{ available_modes[0] }}{% else %}$training_mode{% endif %} \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. 
list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% endif %} {% set training_mode_descs = { "finetune_fw": "Full weight fine-tuning (BF16 and FP8 supported).", "finetune_lora": "LoRA fine-tuning (BF16 supported).", "finetune_qlora": "QLoRA fine-tuning (BF16 supported).", "HF_finetune_lora": "LoRA fine-tuning with Hugging Face PEFT.", } %} {% set available_modes = training_modes | select("in", ["finetune_fw", "finetune_lora", "finetune_qlora", "HF_finetune_lora"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Fine-tuning To start the fine-tuning benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. See :ref:`supported training modes `. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if "finetune_fw" in available_modes %} or ``FP8``{% endif %} - All models support BF16.{% if "finetune_fw" in available_modes %} FP8 is only available for full weight fine-tuning.{% endif %} * - ``$sequence_length`` - Between 2048 and 16384. - Sequence length for the language model. {% if model.mad_tag in ["pyt_train_llama3.2-vision-11b", "pyt_train_llama-3.2-vision-90b"] %} .. note:: For LoRA and QLoRA support with vision models (Llama 3.2 11B and 90B), use the following torchtune commit for compatibility: .. code-block:: shell git checkout 48192e23188b1fc524dd6d127725ceb2348e7f0e {% elif model.mad_tag in ["pyt_train_llama-2-7b", "pyt_train_llama-2-13b", "pyt_train_llama-2-70b"] %} .. note:: You might encounter the following error with Llama 2: ``ValueError: seq_len (16384) of input tensor should be smaller than max_seq_len (4096)``. This error indicates that an input sequence is longer than the model's maximum context window. Ensure your tokenized input does not exceed the model's ``max_seq_len`` (4096 tokens in this case). You can resolve this by truncating the input or splitting it into smaller chunks before passing it to the model. Note on reproducibility: The results in this guide are based on commit ``b4c98ac`` from the upstream ``__ repository. For the latest updates, you can use the main branch. {% endif %} {% endif %} {% endfor %} {% endfor %} .. rubric:: Benchmarking examples For examples of benchmarking commands, see ``__. .. _amd-pytorch-training-multinode-examples-v259: Multi-node training ------------------- Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. See :ref:`rocm-for-ai-multi-node-setup-pyt-train-example` for example Slurm run commands. Pre-training ~~~~~~~~~~~~ Multi-node training with torchtitan is supported. The provided SLURM script is pre-configured for Llama 3 70B. 
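Before submitting the job, you can confirm that the SLURM cluster has idle nodes available and that your account has no conflicting jobs queued. This is a minimal sketch using standard SLURM client commands; partition names and limits are site-specific.

.. code-block:: shell

   # Show partitions and how many nodes are idle
   sinfo

   # Show your currently queued or running jobs
   squeue -u $USER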
To launch the training job on a SLURM cluster for Llama 3 70B, run the following commands from the MAD repository. .. code-block:: shell # In the MAD repository cd scripts/pytorch_train sbatch run_slurm_train.sh Fine-tuning ~~~~~~~~~~~ Multi-node training with torchtune is supported. The provided SLURM script is pre-configured for Llama 3.3 70B. To launch the training job on a SLURM cluster for Llama 3.3 70B, run the following commands from the MAD repository. .. code-block:: shell huggingface-cli login # Get access to HF Llama model space huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --local-dir ./models/Llama-3.3-70B-Instruct # Download the Llama 3.3 model locally # In the MAD repository cd scripts/pytorch_train sbatch Torchtune_Multinode.sh .. note:: Information regarding benchmark setup: * By default, Llama 3.3 70B is fine-tuned using ``alpaca_dataset``. * You can adjust the torchtune `YAML configuration file `__ if you're using a different model. * The number of nodes and other parameters can be tuned in the SLURM script ``Torchtune_Multinode.sh``. * Set the ``mounting_paths`` inside the SLURM script. Once the run is finished, you can find the log files in the ``result_torchtune/`` directory. Known issues ============ PyTorch Profiler may produce inaccurate traces when CPU activity profiling is enabled. Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- .. meta:: :description: How to train a model using Megatron-LM for ROCm. :keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch ******************************************** Training a model with Primus and Megatron-LM ******************************************** `Primus `__ is a unified and flexible training framework for AMD Instinct GPUs designed to support multiple training engine backends -- including Megatron -- to deliver scalable, high-performance model training. Performance acceleration is powered by `Primus Turbo `__ and ROCm libraries. .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/megatron-lm `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including Megatron-LM and :doc:`torchtitan `. Primus with Megatron is designed to replace the :doc:`ROCm Megatron-LM training ` workflow. To learn how to migrate workloads from Megatron-LM to Primus with Megatron, see :doc:`previous-versions/megatron-lm-primus-migration-guide`. AMD provides a ready-to-use Docker images for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components for Primus, ROCm, and Megatron-LM. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml .. tab-set:: .. tab-item:: {{ data.docker.pull_tag }} :sync: {{ data.docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in data.docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. 
_amd-primus-megatron-lm-model-support-v25.11: Supported models ================ The following models are pre-optimized for performance on AMD Instinct GPUs. Some instructions, commands, and training examples in this documentation might vary by model -- select one to get started. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
   <!-- Model and variant selector buttons, generated from the model_groups data -->
.. note:: Some models, such as Llama, require an external license agreement through a third party (for example, Meta). System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. .. _mi300x-amd-primus-megatron-lm-training-v25.11: Environment setup ================= .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml Use the following instructions to set up the environment, configure the script to train models, and reproduce the benchmark results on AMD Instinct GPUs. .. _amd-primus-megatron-lm-requirements-v25.11: Pull the Docker image .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml {% set docker = data.docker %} 1. Pull the ``{{ docker.pull_tag }}`` Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --device /dev/infiniband \ --network host --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ --shm-size 128G \ --name primus_training_env \ {{ docker.pull_tag }} Use these commands if you exit the ``primus_training_env`` container and need to return to it. .. code-block:: shell docker start primus_training_env docker exec -it primus_training_env bash The Docker container hosts verified commit ``c4c083de`` of the `Primus `__ repository. .. _amd-primus-megatron-lm-environment-setup-v25.11: Configuration ============= Primus defines a training configuration in YAML for each model in `examples/megatron/configs `__. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml {% set model_groups = data.model_groups %} {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} For example, to update training parameters for {{ model.model }}, you can update ``examples/megatron/configs/{{ model.config_name }}``. Training configuration YAML files for other models follow this naming convention. {% endfor %} {% endfor %} .. note:: See :ref:`Key options ` for more information on configuration options. Dataset options --------------- You can use either mock data or real data for training. * Mock data can be useful for testing and validation. Use the ``mock_data`` field to toggle between mock and real data. The default value is ``true`` for enabled. .. code-block:: yaml mock_data: true * If you're using a real dataset, update the ``train_data_path`` field to point to the location of your dataset. .. code-block:: bash mock_data: false train_data_path: /path/to/your/dataset Ensure that the files are accessible inside the Docker container. .. _amd-primus-megatron-lm-tokenizer-v25.11: Tokenizer --------- Set the ``HF_TOKEN`` environment variable with right permissions to access the tokenizer for each model. .. 
code-block:: bash # Export your HF_TOKEN in the workspace export HF_TOKEN= .. note:: In Primus, each model uses a tokenizer from Hugging Face. For example, Llama 3.1 8B model uses ``tokenizer_model: meta-llama/Llama-3.1-8B`` and ``tokenizer_type: Llama3Tokenizer`` defined in the `llama3.1-8B model `__ definition. .. _amd-primus-megatron-lm-run-training-v25.11: Run training ============ Use the following example commands to set up the environment, configure :ref:`key options `, and run training on AMD Instinct GPUs using Primus with the Megatron backend. Single node training -------------------- To run training on a single node, navigate to ``/workspace/Primus`` and use the following setup command: .. code-block:: shell pip install -r requirements.txt export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.3 70B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run pre-training for Llama 3.3 70B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.3_70B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.3_70B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 8B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run pre-training for Llama 3.1 8B FP8, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.1_8B-FP8-pretrain.yaml \ bash ./examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.1_8B-FP8-pretrain.yaml \ bash ./examples/run_pretrain.sh For Llama 3.1 8B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.1_BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.1_8B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 70B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run pre-training for Llama 3.1 70B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. 
code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.1_70B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.1_70B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh To run the training on a single node for Llama 3.1 70B FP8, use the following command. .. note:: The MI300X configuration uses a proxy model. On MI300X GPUs, use two or more nodes to run the full Llama 3.1 70B model with FP8 precision. MI355X and MI350X GPUs can support the full 70B model with FP8 precision on a single node. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama3.1_70B-FP8-pretrain.yaml \ bash ./examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama3.1_70B-FP8-pretrain.yaml \ bash ./examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 40 \ --fp8 hybrid \ --no_fp8_weight_transpose_cache true .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 7B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run pre-training for Llama 2 7B FP8, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama2_7B-FP8-pretrain.yaml \ bash ./examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama2_7B-FP8-pretrain.yaml \ bash ./examples/run_pretrain.sh To run pre-training for Llama 2 7B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama2_7B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama2_7B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 70B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run pre-training for Llama 2 70B BF16, run: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/llama2_70B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. 
code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/llama2_70B-BF16-pretrain.yaml \ bash ./examples/run_pretrain.sh .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v3-proxy Once setup is complete, run the appropriate training command. The following run commands are tailored to DeepSeek-V3. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run training on a single node for DeepSeek-V3 (MoE with expert parallel) BF16 with 3-layer proxy, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/deepseek_v3-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 3 \ --moe_layer_freq 1 \ --train_iters 50 \ --micro_batch_size 8 \ --global_batch_size 64 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/deepseek_v3-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --num_layers 3 \ --moe_layer_freq 1 \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_deepseek-v2-lite-16b Once setup is complete, run the appropriate training command. The following run commands are tailored to DeepSeek-V2-Lite. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run training on a single node for DeepSeek-V2-Lite (MoE with expert parallel) BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/deepseek_v2_lite-BF16-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/deepseek_v2_lite-BF16-pretrain.yaml \ bash examples/run_pretrain.sh .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x7B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run training on a single node for Mixtral 8x7B (MoE with expert parallel), use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/mixtral_8x7B_v0.1-BF16-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/mixtral_8x7B_v0.1-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x22b-proxy Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x22B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. 
To run training on a single node for Mixtral 8x22B BF16 (MoE with expert parallel) 4-layer proxy, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/mixtral_8x22B_v0.1-BF16-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/mixtral_8x22B_v0.1-BF16-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --num_layers 4 \ --pipeline_model_parallel_size 1 \ --micro_batch_size 1 \ --global_batch_size 16 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 7B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run training on a single node for Qwen 2.5 7B BF16, use the following command: .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/qwen2.5_7B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/qwen2.5_7B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh For FP8, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/qwen2.5_7B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/qwen2.5_7B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 72B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To run the training on a single node for Qwen 2.5 72B BF16, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X and MI350X .. code-block:: shell EXP=examples/megatron/configs/MI355X/qwen2.5_72B-pretrain.yaml \ bash examples/run_pretrain.sh \ --train_iters 50 \ --micro_batch_size 16 \ --global_batch_size 256 .. tab-item:: MI300X :sync: MI325X and MI300X .. code-block:: shell # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 EXP=examples/megatron/configs/MI300X/qwen2.5_72B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh .. _amd-primus-megatron-multi-node-examples-v25.11: Multi-node training examples ---------------------------- Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. To run training on multiple nodes, you can use the `run_slurm_pretrain.sh `__ to launch the multi-node workload. Use the following steps to setup your environment: .. 
datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-megatron-benchmark-models.yaml {% set docker = data.docker %} .. code-block:: shell git clone --recurse-submodules https://github.com/AMD-AGI/Primus.git cd Primus git checkout c4c083de64ba3e8f19ccc9629411267108931f9e git submodule update --init --recursive export DOCKER_IMAGE={{ docker.pull_tag }} export HF_TOKEN= export HSA_NO_SCRATCH_RECLAIM=1 export NVTE_CK_USES_BWD_V3=1 export NCCL_IB_HCA= # specify which RDMA interfaces to use for communication export NCCL_SOCKET_IFNAME= # your Network Interface export GLOO_SOCKET_IFNAME= # your Network Interface export NCCL_IB_GID_INDEX=3 # Set InfiniBand GID index for NCCL communication. Default is 3 for ROCE # Set the variables for better performance # only on MI325X and MI300X export PRIMUS_TURBO_ATTN_V3_ATOMIC_FP32=1 export NVTE_CK_IS_V3_ATOMIC_FP32=1 .. note:: * Make sure correct network drivers are installed on the nodes. If inside a Docker, either install the drivers inside the Docker container or pass the network drivers from the host while creating Docker container. * If ``NCCL_IB_HCA`` and ``NCCL_SOCKET_IFNAME`` are not set, Primus will try to auto-detect. However, since NICs can vary accross different cluster, it is encouraged to explicitly export your NCCL parameters for the cluster. * To find your network interface, you can use ``ip a``. * To find RDMA interfaces, you can use ``ibv_devices`` to get the list of all the RDMA/IB devices. * Remember to set ``DOCKER_IMAGE`` and ``HF_TOKEN`` (see :ref:`amd-primus-megatron-lm-tokenizer-v25.11`) as appropriate. .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-8b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 8B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To train Llama 3.1 8B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/MI300X/llama3.1_8B-FP8-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --global_batch_size 1024 \ .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 7B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To train Llama 2 7B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/MI300X/llama2_7B-FP8-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --global_batch_size 2048 \ .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.1-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.1 70B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To train Llama 3.1 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/MI300X/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ To train Llama 3.1 70B BF16 on 8 nodes, run: .. 
code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/MI300X/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-2-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 2 70B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To train Llama 2 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case. NNODES=8 \ EXP=examples/megatron/configs/MI300X/llama2_70B-FP8-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 10 \ --global_batch_size 640 \ --recompute_num_layers 80 \ To train Llama 2 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/MI300X/llama2_70B-BF16-pretrain.yaml \ bash ./examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 1536 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_llama-3.3-70b Once setup is complete, run the appropriate training command. The following run commands are tailored to Llama 3.3 70B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To train Llama 3.3 70B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/MI300X/llama3.3_70B-FP8-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 4 \ --global_batch_size 256 \ --recompute_num_layers 80 \ To train Llama 3.3 70B BF16 on 8 nodes, run: .. code-block:: shell NNODES=8 \ EXP=examples/megatron/configs/MI300X/llama3.3_70B-BF16-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 1 \ --global_batch_size 256 \ --recompute_num_layers 12 .. container:: model-doc primus_pyt_megatron_lm_train_mixtral-8x7b Once setup is complete, run the appropriate training command. The following run commands are tailored to Mixtral 8x7B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To train Mixtral 8x7B BF16 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/MI300X/mixtral_8x7B_v0.1-BF16-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 2 \ --global_batch_size 256 .. container:: model-doc primus_pyt_megatron_lm_train_qwen2.5-72b Once setup is complete, run the appropriate training command. The following run commands are tailored to Qwen 2.5 72B. See :ref:`amd-primus-megatron-lm-model-support-v25.11` to switch to another available model. To train Qwen 2.5 72B FP8 on 8 nodes, run: .. code-block:: shell # Adjust the training parameters. # For example, `global_batch_size: 8 * #single_node_bs` for 8 nodes in this case NNODES=8 \ EXP=examples/megatron/configs/qwen2.5_72B-FP8-pretrain.yaml \ bash examples/run_slurm_pretrain.sh \ --micro_batch_size 8 \ --global_batch_size 512 \ --recompute_num_layers 80 \ .. _amd-primus-megatron-lm-benchmark-test-vars-v25.11: Key options ----------- The following are key options to take note of. fp8: ``hybrid`` enables FP8 GEMMs. use_torch_fsdp2: ``use_torch_fsdp2: 1`` enables torch FSDP v2.
If FSDP is enabled, set ``use_distributed_optimizer`` and ``overlap_param_gather`` to ``false``. profile: To enable PyTorch profiling, set these parameters: .. code-block:: yaml profile: true use_pytorch_profiler: true profile_step_end: 7 profile_step_start: 6 train_iters: The total number of iterations (default: 50). mock_data: True by default. micro_batch_size: Micro batch size. global_batch_size: Global batch size. recompute_granularity: For activation checkpointing. num_layers: For using a reduced number of layers, as with proxy models. Known issues ============ The DeepSeekV3 and Mixtral 8x22B proxy models may exit with an error due to a memory free issue. However, this does not impact training runs. All iterations (50 in this case) complete before the exit, and the results are still available at the end. Further reading =============== - For an introduction to Primus, see `Primus: A Lightweight, Unified Training Framework for Large Models on AMD GPUs `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`previous-versions/megatron-lm-history` to find documentation for previous releases of the ``ROCm/megatron-lm`` Docker image. This training environment now uses Primus with Megatron as the primary configuration. Limited support for the legacy ROCm Megatron-LM is still available; see the :doc:`megatron-lm` documentation. --- .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker **************************************** Training a model with Primus and PyTorch **************************************** `Primus `__ is a unified and flexible LLM training framework designed to streamline LLM training on AMD Instinct GPUs through a modular, reproducible configuration paradigm. Primus now supports the PyTorch torchtitan backend. .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/pytorch-training `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including torchtitan and :doc:`Megatron-LM `. Primus with the PyTorch torchtitan backend is designed to replace the :doc:`ROCm PyTorch training ` workflow. See :doc:`pytorch-training` for steps to run workloads without Primus. AMD provides a ready-to-use Docker image for MI355X, MI350X, MI325X, and MI300X GPUs containing essential components for Primus and PyTorch training with Primus Turbo optimizations. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-pytorch-benchmark-models.yaml .. tab-set:: .. tab-item:: {{ data.docker.pull_tag }} :sync: {{ data.docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in data.docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-primus-pytorch-model-support-v25.11: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI325X and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. ..
datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-pytorch-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
(Model and variant selector buttons are rendered here from the model groups and variants defined in the YAML data file.)
.. seealso:: For additional workloads, including Llama 3.3, Llama 3.2, Llama 2, GPT OSS, Qwen, and Flux models, see the documentation :doc:`pytorch-training` (without Primus) .. _amd-primus-pytorch-performance-measurements-v25.11: System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. Pull the Docker image ===================== .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-pytorch-benchmark-models.yaml Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ data.docker.pull_tag }} Run training ============ Once the setup is complete, choose between the following two workflows to start benchmarking training. For fine-tuning workloads and multi-node training examples, see :doc:`pytorch-training` (without Primus). For best performance on MI325X, MI350X, and MI355X GPUs, you might need to tweak some configurations (such as batch sizes). .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/primus-pytorch-benchmark-models.yaml {% set docker = data.docker %} {% set model_groups = data.model_groups %} .. tab-set:: .. tab-item:: MAD-integrated benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run command is tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v25.11` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Primus benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run commands are tailored to {{ model.model }}. See :ref:`amd-primus-pytorch-model-support-v25.11` to switch to another available model. .. rubric:: Download the Docker image and required packages 1. Pull the ``{{ docker.pull_tag }}`` Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Run the Docker container. .. 
code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash The Docker container hosts the verified commit ``c4c083de`` of the `Primus `__ repository. .. rubric:: Prepare training datasets and dependencies The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token .. rubric:: Pretraining To get started, navigate to the ``Primus`` directory in your container. .. code-block:: shell cd /workspace/Primus Now, to start the pretraining benchmark, use the ``run_pretrain.sh`` script included with Primus, with the appropriate options. .. rubric:: Benchmarking examples .. container:: model-doc primus_pyt_train_llama-3.1-8b Use the following command to train Llama 3.1 8B with BF16 precision using Primus torchtitan. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 6 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_8B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh To train Llama 3.1 8B with FP8 precision, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/llama3.1_8B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_8B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 7 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_8B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh .. container:: model-doc primus_pyt_train_llama-3.1-70b Use the following command to train Llama 3.1 70B with BF16 precision using Primus torchtitan. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 6 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_70B-BF16-pretrain.yaml \ bash examples/run_pretrain.sh To train Llama 3.1 70B with FP8 precision, use the following command. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI325X :sync: MI325X ..
code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 5 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/llama3.1_70B-FP8-pretrain.yaml \ bash examples/run_pretrain.sh .. container:: model-doc primus_pyt_train_deepseek-v3-16b Use the following command to train DeepSeek V3 16B with BF16 precision using Primus torchtitan. .. tab-set:: .. tab-item:: MI355X and MI350X :sync: MI355X .. code-block:: shell EXP=examples/torchtitan/configs/MI355X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh .. tab-item:: MI325X :sync: MI325X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh --training.local_batch_size 10 .. tab-item:: MI300X :sync: MI300X .. code-block:: shell EXP=examples/torchtitan/configs/MI300X/deepseek_v3_16b-pretrain.yaml \ bash examples/run_pretrain.sh {% endfor %} {% endfor %} Further reading =============== - For an introduction to Primus, see `Primus: A Lightweight, Unified Training Framework for Large Models on AMD GPUs `__. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`previous-versions/pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- :orphan: .. meta:: :description: How to train a model using PyTorch for ROCm. :keywords: ROCm, AI, LLM, train, PyTorch, torch, Llama, flux, tutorial, docker ************************************** Training a model with PyTorch on ROCm ************************************** .. note:: For a unified training solution on AMD GPUs with ROCm, the `rocm/pytorch-training `__ Docker Hub registry will be deprecated soon in favor of `rocm/primus `__. The ``rocm/primus`` Docker containers will cover PyTorch training ecosystem frameworks, including torchtitan and :doc:`Megatron-LM `. See :doc:`primus-pytorch` for details. PyTorch is an open-source machine learning framework that is widely used for model training, with GPU-optimized components for transformer-based models. The PyTorch for ROCm training Docker image provides a prebuilt optimized environment for fine-tuning and pretraining a model on AMD Instinct MI325X and MI300X GPUs. It includes the following software components to accelerate training workloads: .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/pytorch-training-benchmark-models.yaml .. tab-set:: .. tab-item:: {{ data.docker.pull_tag }} :sync: {{ data.docker.pull_tag }} .. list-table:: :header-rows: 1 * - Software component - Version {% for component_name, component_version in data.docker.components.items() %} * - {{ component_name }} - {{ component_version }} {% endfor %} .. _amd-pytorch-training-model-support-v25.11: Supported models ================ The following models are pre-optimized for performance on the AMD Instinct MI355X, MI350X, MI325X, and MI300X GPUs. Some instructions, commands, and training recommendations in this documentation might vary by model -- select one to get started. ..
datatemplate:yaml:: /data/how-to/rocm-for-ai/training/pytorch-training-benchmark-models.yaml {% set model_groups = data.model_groups %} .. raw:: html
(Model and variant selector buttons are rendered here from the model groups and variants defined in the YAML data file.)
.. _amd-pytorch-training-supported-training-modes-v25.11: The following table lists supported training modes per model. .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/pytorch-training-benchmark-models.yaml {% set model_groups = data.model_groups %} .. dropdown:: Supported training modes .. list-table:: :header-rows: 1 * - Model - Supported training modes {% for model_group in model_groups %} {% set models = model_group.models %} {% for model in models %} {% if model.training_modes %} * - {{ model.model }} - ``{{ model.training_modes | join('``, ``') }}`` {% endif %} {% endfor %} {% endfor %} .. note:: Some model and fine-tuning combinations are not listed. This is because the `upstream torchtune repository `__ doesn't provide default YAML configurations for them. For advanced usage, you can create a custom configuration to enable unlisted fine-tuning methods by using an existing file in the ``/workspace/torchtune/recipes/configs`` directory as a template. .. _amd-pytorch-training-performance-measurements-v25.11: Performance measurements ======================== To evaluate performance, the `Performance results with AMD ROCm software `_ page provides reference throughput and latency measurements for training popular AI models. .. note:: The performance data presented in `Performance results with AMD ROCm software `_ should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software. System validation ================= Before running AI workloads, it's important to validate that your AMD hardware is configured correctly and performing optimally. If you have already validated your system settings, including aspects like NUMA auto-balancing, you can skip this step. Otherwise, complete the procedures in the :ref:`System validation and optimization ` guide to properly configure your system settings before starting training. To test for optimal performance, consult the recommended :ref:`System health benchmarks `. This suite of tests will help you verify and fine-tune your system's configuration. This Docker image is optimized for specific model configurations outlined below. Performance can vary for other training workloads, as AMD doesn’t test configurations and run conditions outside those described. Run training ============ .. datatemplate:yaml:: /data/how-to/rocm-for-ai/training/pytorch-training-benchmark-models.yaml {% set docker = data.docker %} {% set model_groups = data.model_groups %} Once the setup is complete, choose between two options to start benchmarking training: .. tab-set:: .. tab-item:: MAD-integrated benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following run command is tailored to {{ model.model }}. See :ref:`amd-pytorch-training-model-support-v25.11` to switch to another available model. 1. Clone the ROCm Model Automation and Dashboarding (``__) repository to a local directory and install the required packages on the host machine. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD pip install -r requirements.txt 2. For example, use this command to run the performance benchmark test on the {{ model.model }} model using one node with the {{ model.precision }} data type on the host machine. .. 
code-block:: shell export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models" madengine run \ --tags {{ model.mad_tag }} \ --keep-model-dir \ --live-output \ --timeout 28800 MAD launches a Docker container with the name ``container_ci-{{ model.mad_tag }}``. The latency and throughput reports of the model are collected in ``~/MAD/perf.csv``. {% endfor %} {% endfor %} .. tab-item:: Standalone benchmarking {% for model_group in model_groups %} {% for model in model_group.models %} .. container:: model-doc {{ model.mad_tag }} The following commands are tailored to {{ model.model }}. See :ref:`amd-pytorch-training-model-support-v25.11` to switch to another available model. {% endfor %} {% endfor %} .. rubric:: Download the Docker image and required packages 1. Use the following command to pull the Docker image from Docker Hub. .. code-block:: shell docker pull {{ docker.pull_tag }} 2. Launch the Docker container. .. code-block:: shell docker run -it \ --device /dev/dri \ --device /dev/kfd \ --network host \ --ipc host \ --group-add video \ --cap-add SYS_PTRACE \ --security-opt seccomp=unconfined \ --privileged \ -v $HOME:$HOME \ -v $HOME/.ssh:/root/.ssh \ --shm-size 64G \ --name training_env \ {{ docker.pull_tag }} Use these commands if you exit the ``training_env`` container and need to return to it. .. code-block:: shell docker start training_env docker exec -it training_env bash 3. In the Docker container, clone the ``__ repository and navigate to the benchmark scripts directory ``/workspace/MAD/scripts/pytorch_train``. .. code-block:: shell git clone https://github.com/ROCm/MAD cd MAD/scripts/pytorch_train .. rubric:: Prepare training datasets and dependencies 1. The following benchmarking examples require downloading models and datasets from Hugging Face. To ensure successful access to gated repos, set your ``HF_TOKEN``. .. code-block:: shell export HF_TOKEN=$your_personal_hugging_face_access_token 2. Run the setup script to install libraries and datasets needed for benchmarking. .. code-block:: shell ./pytorch_benchmark_setup.sh .. container:: model-doc pyt_train_llama-3.1-8b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 8B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 .. container:: model-doc pyt_train_llama-3.1-70b ``pytorch_benchmark_setup.sh`` installs the following libraries for Llama 3.1 70B: .. list-table:: :header-rows: 1 * - Library - Reference * - ``datasets`` - `Hugging Face Datasets `_ 3.2.0 * - ``torchdata`` - `TorchData `__ * - ``tomli`` - `Tomli `__ * - ``tiktoken`` - `tiktoken `__ * - ``blobfile`` - `blobfile `__ * - ``tabulate`` - `tabulate `__ * - ``wandb`` - `Weights & Biases `__ * - ``sentencepiece`` - `SentencePiece `__ 0.2.0 * - ``tensorboard`` - `TensorBoard `__ 2.18.0 .. container:: model-doc pyt_train_flux ``pytorch_benchmark_setup.sh`` installs the following libraries for FLUX: .. 
list-table:: :header-rows: 1 * - Library - Reference * - ``accelerate`` - `Hugging Face Accelerate `_ * - ``datasets`` - `Hugging Face Datasets `__ 3.2.0 * - ``sentencepiece`` - `SentencePiece `__ 0.2.0 * - ``tensorboard`` - `TensorBoard `__ 2.18.0 * - ``csvkit`` - `csvkit `__ 2.0.1 * - ``deepspeed`` - `DeepSpeed `__ 0.16.2 * - ``diffusers`` - `Hugging Face Diffusers `__ 0.31.0 * - ``GitPython`` - `GitPython `__ 3.1.44 * - ``opencv-python-headless`` - `opencv-python-headless `__ 4.10.0.84 * - ``peft`` - `PEFT `__ 0.14.0 * - ``protobuf`` - `Protocol Buffers `__ 5.29.2 * - ``pytest`` - `PyTest `__ 8.3.4 * - ``python-dotenv`` - `python-dotenv `__ 1.0.1 * - ``seaborn`` - `Seaborn `__ 0.13.2 * - ``transformers`` - `Transformers `__ 4.47.0 ``pytorch_benchmark_setup.sh`` downloads the following datasets from Hugging Face: * `frank-chieng/chinese_architecture_siheyuan `__ {% for model_group in model_groups %} {% for model in model_group.models %} {% set training_modes = model.training_modes %} {% set training_mode_descs = { "pretrain": "Benchmark pre-training.", "HF_pretrain": "Llama 3.1 8B pre-training with FP8 precision." } %} {% set available_modes = training_modes | select("in", ["pretrain", "HF_pretrain"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Pretraining To start the pre-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. {% if model.mad_tag == "pyt_train_dlrm" %} 1. Go to the DLRM directory. .. code-block:: shell cd /workspace/DLRMBenchmark 2. To run the single node training benchmark for DLRM-v2 with TF32 precision, run the following script. .. code-block:: shell ./launch_training_single_node.sh To run with MAD within the Docker container, use the following command. .. code-block:: shell ./pytorch_benchmark_report.sh -t pretrain -m DLRM {% else %} .. code-block:: shell ./pytorch_benchmark_report.sh -t {% if available_modes | length == 1 %}{{ available_modes[0] }}{% else %}$training_mode{% endif %} \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length {% if model.mad_tag == "pyt_train_flux" %} .. container:: model-doc {{ model.mad_tag }} .. note:: Currently, FLUX models are not supported out-of-the-box on this Docker. To use FLUX, refer to ``rocm/pytorch-training`` Docker: :doc:`previous-versions/pytorch-training-v25.6` Occasionally, downloading the Flux dataset might fail. In the event of this error, manually download it from Hugging Face at `black-forest-labs/FLUX.1-dev `_ and save it to `/workspace/FluxBenchmark`. This ensures that the test script can access the required dataset. {% endif %} .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% endif %} {% endif %} {% set training_modes = model.training_modes %} {% set training_mode_descs = { "posttrain": "Benchmark post-training.", } %} {% set available_modes = training_modes | select("in", ["posttrain"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. 
rubric:: Post-training To start the post-training benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. .. code-block:: shell ./pytorch_benchmark_report.sh -t {% if available_modes | length == 1 %}{{ available_modes[0] }}{% else %}$training_mode{% endif %} \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if model.mad_tag == "pyt_train_llama-3.1-8b" %} or ``FP8``{% endif %} - Only Llama 3.1 8B supports FP8 precision. * - ``$sequence_length`` - Sequence length for the language model. - Between 2048 and 8192. 8192 by default. {% endif %} {% set training_mode_descs = { "finetune_fw": "Full weight fine-tuning (BF16 and FP8 supported).", "finetune_lora": "LoRA fine-tuning (BF16 supported).", "finetune_qlora": "QLoRA fine-tuning (BF16 supported).", "HF_finetune_lora": "LoRA fine-tuning with Hugging Face PEFT.", } %} {% set available_modes = training_modes | select("in", ["finetune_fw", "finetune_lora", "finetune_qlora", "HF_finetune_lora"]) | list %} {% if available_modes %} .. container:: model-doc {{ model.mad_tag }} .. rubric:: Fine-tuning To start the fine-tuning benchmark, use the following command with the appropriate options. See the following list of options and their descriptions. See :ref:`supported training modes `. .. code-block:: shell ./pytorch_benchmark_report.sh -t $training_mode \ -m {{ model.model_repo }} \ -p $datatype \ -s $sequence_length .. list-table:: :header-rows: 1 * - Name - Options - Description {% for mode in available_modes %} * - {% if loop.first %}``$training_mode``{% endif %} - ``{{ mode }}`` - {{ training_mode_descs[mode] }} {% endfor %} * - ``$datatype`` - ``BF16``{% if "finetune_fw" in available_modes %} or ``FP8``{% endif %} - All models support BF16.{% if "finetune_fw" in available_modes %} FP8 is only available for full weight fine-tuning.{% endif %} * - ``$sequence_length`` - Between 2048 and 16384. - Sequence length for the language model. {% if model.mad_tag in ["pyt_train_llama3.2-vision-11b", "pyt_train_llama-3.2-vision-90b"] %} .. note:: For LoRA and QLoRA support with vision models (Llama 3.2 11B and 90B), use the following torchtune commit for compatibility: .. code-block:: shell git checkout 48192e23188b1fc524dd6d127725ceb2348e7f0e {% elif model.mad_tag in ["pyt_train_llama-2-7b", "pyt_train_llama-2-13b", "pyt_train_llama-2-70b"] %} .. note:: You might encounter the following error with Llama 2: ``ValueError: seq_len (16384) of input tensor should be smaller than max_seq_len (4096)``. This error indicates that an input sequence is longer than the model's maximum context window. Ensure your tokenized input does not exceed the model's ``max_seq_len`` (4096 tokens in this case). You can resolve this by truncating the input or splitting it into smaller chunks before passing it to the model. Note on reproducibility: The results in this guide are based on commit ``b4c98ac`` from the upstream ``__ repository. For the latest updates, you can use the main branch. {% endif %} {% endif %} {% endfor %} {% endfor %} .. rubric:: Benchmarking examples For examples of benchmarking commands, see ``__. .. 
_amd-pytorch-training-multinode-examples-v25.11: Multi-node training ------------------- Refer to :doc:`/how-to/rocm-for-ai/system-setup/multi-node-setup` to configure your environment for multi-node training. See :ref:`rocm-for-ai-multi-node-setup-pyt-train-example` for example Slurm run commands. Pre-training ~~~~~~~~~~~~ Multi-node training with torchtitan is supported. The provided SLURM script is pre-configured for Llama 3 70B. To launch the training job on a SLURM cluster for Llama 3 70B, run the following commands from the MAD repository. .. code-block:: shell # In the MAD repository cd scripts/pytorch_train sbatch run_slurm_train.sh Fine-tuning ~~~~~~~~~~~ Multi-node training with torchtune is supported. The provided SLURM script is pre-configured for Llama 3.3 70B. To launch the training job on a SLURM cluster for Llama 3.3 70B, run the following commands from the MAD repository. .. code-block:: shell huggingface-cli login # Get access to HF Llama model space huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --local-dir ./models/Llama-3.3-70B-Instruct # Download the Llama 3.3 model locally # In the MAD repository cd scripts/pytorch_train sbatch Torchtune_Multinode.sh .. note:: Information regarding benchmark setup: * By default, Llama 3.3 70B is fine-tuned using ``alpaca_dataset``. * You can adjust the torchtune `YAML configuration file `__ if you're using a different model. * The number of nodes and other parameters can be tuned in the SLURM script ``Torchtune_Multinode.sh``. * Set the ``mounting_paths`` inside the SLURM script. Once the run is finished, you can find the log files in the ``result_torchtune/`` directory. Further reading =============== - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide `__. - To learn more about system settings and management practices to configure your system for AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization `_. - For a list of other ready-made Docker images for AI with ROCm, see `AMD Infinity Hub `_. Previous versions ================= See :doc:`previous-versions/pytorch-training-history` to find documentation for previous releases of the ``ROCm/pytorch-training`` Docker image. --- .. meta:: :description: How to use ROCm for training models :keywords: ROCm, LLM, training, GPUs, training model, scaling model, usage, tutorial ======================= Use ROCm for training ======================= Training models is the process of teaching a computer program to recognize patterns in data. This involves providing the computer with large amounts of labeled data and allowing it to learn from that data, adjusting the model's parameters. The process of training models is computationally intensive, requiring specialized hardware like GPUs to accelerate computations and reduce training time. Training models on AMD GPUs with the ROCm™ software platform leverages the GPUs' parallel processing capabilities and efficient compute resource management to significantly reduce training time and improve overall performance in machine learning and deep learning workloads. The ROCm software platform makes it easier to train models on AMD GPUs while maintaining compatibility with existing code and tools.
The platform also provides features like multi-GPU support, allowing for scaling and parallelization of model training across multiple GPUs to enhance performance. The AI Developer Hub contains `AMD ROCm tutorials `_ for training, fine-tuning, and inference. These tutorials leverage popular machine learning frameworks on AMD GPUs. In this guide, you'll learn about: - Training a model - :doc:`With Primus (Megatron-LM backend) ` - :doc:`With Megatron-LM ` - :doc:`With PyTorch ` - :doc:`With JAX MaxText ` - :doc:`With LLM Foundry ` - :doc:`Scaling model training ` --- .. meta:: :description: How to scale and accelerate model training :keywords: ROCm, AI, LLM, train, fine-tune, deploy, FSDP, DeepSpeed, LLaMA, tutorial ********************** Scaling model training ********************** Large-scale models like OpenAI GPT-2 or Meta Llama 2 70B present a fundamental challenge: no single GPU can simultaneously store and process all of the model's parameters during training. PyTorch provides an answer to this computational constraint through its distributed training frameworks. .. _rocm-for-ai-pytorch-distributed: PyTorch distributed =================== Features in ``torch.distributed`` are categorized into three main components: - `Distributed data-parallel training `_ (DDP) - `RPC-Based distributed training `_ (RPC) - `Collective communication `_ In this topic, the focus is on the distributed data-parallelism strategy as it’s the most popular. To get started with DDP, you need to first understand how to coordinate the model and its training data across multiple GPUs. The DDP workflow on multiple GPUs is as follows: #. Split the current global training batch into small local batches on each GPU. For instance, if you have 8 GPUs and the global batch is set at 32 samples, each of the 8 GPUs will have a local batch size of 4 samples. #. Copy the model to every device so each can process its local batches independently. #. Run a forward pass, then a backward pass, and output the gradient of the weights with respect to the loss of the model for that local batch. This happens in parallel on multiple devices. #. Synchronize the local gradients computed by each device and combine them to update the model weights. The updated weights are then redistributed to each device. In DDP training, each process or worker owns a replica of the model and processes a batch of data, and then the reducer uses ``allreduce`` to sum up gradients over different workers. See the following developer blogs for more in-depth explanations and examples. * `Multi GPU training with DDP — PyTorch Tutorials `__ * `Building a decoder transformer model on AMD GPUs — ROCm Blogs `_ .. _rocm-for-ai-pytorch-fsdp: PyTorch FSDP ------------ As noted in :ref:`PyTorch distributed `, DDP model weights and optimizer states are evenly replicated across all workers. Fully Sharded Data Parallel (FSDP) is a type of data parallelism that shards model parameters, optimizer states, and gradients across DDP ranks. When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes training some very large models feasible by allowing larger models or batch sizes to fit on-device. However, this comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations like overlapping communication and computation.
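The following is a minimal, hypothetical sketch of the wrapping pattern described above, not the full recipe from the linked examples. It assumes a placeholder model and a job launched with ``torchrun`` (for example, ``torchrun --nproc_per_node=8 fsdp_sketch.py``) so that ``LOCAL_RANK`` and the rendezvous environment are already set. On ROCm, the ``nccl`` backend maps to RCCL and HIP devices are driven through the ``torch.cuda`` API.

.. code-block:: python

   import os
   import torch
   import torch.distributed as dist
   from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

   def main():
       dist.init_process_group(backend="nccl")      # "nccl" maps to RCCL on ROCm
       local_rank = int(os.environ["LOCAL_RANK"])   # provided by torchrun
       torch.cuda.set_device(local_rank)            # HIP devices use the torch.cuda API

       # Placeholder model. For plain DDP you would instead wrap it with
       # torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank]).
       model = torch.nn.Sequential(
           torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024)
       ).cuda()
       model = FSDP(model)                          # shard parameters, gradients, and optimizer state

       optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
       inputs = torch.randn(8, 1024, device="cuda") # stand-in for a real local batch
       loss = model(inputs).sum()                   # toy loss for illustration only
       loss.backward()                              # gradient reduce-scatter happens here
       optimizer.step()

       dist.destroy_process_group()

   if __name__ == "__main__":
       main()

The model, batch, and learning rate above are placeholders; in practice, the sharding strategy and wrapping policy are tuned per model, as the resources linked in this section describe.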
For a high-level overview of how FSDP works, review `Getting started with Fully Sharded Data Parallel `_. For detailed training steps, see `PyTorch FSDP examples `_. .. _rocm-for-ai-deepspeed: DeepSpeed --------- `DeepSpeed `_ offers system innovations that make large-scale deep learning training effective, efficient, and easy to use. Innovations such as ZeRO, 3D-Parallelism, DeepSpeed-MoE, ZeRO-Infinity, and so on fall under the training pillar. See `Pre-training a large language model with Megatron-DeepSpeed on multiple AMD GPUs `_ for a detailed example of training with DeepSpeed on an AMD GPU. .. _rocm-for-ai-automatic-mixed-precision: Automatic mixed precision (AMP) ------------------------------- As models increase in size, so do the time and memory needed to train them; their cost also increases. Any measure we can take to reduce training time and memory usage through `automatic mixed precision `_ (AMP) is highly beneficial for most use cases. See `Automatic mixed precision in PyTorch using AMD GPUs — ROCm Blogs `_ for more information about running AMP on an AMD Instinct-Series GPU. .. _rocm-for-ai-fine-tune: Fine-tuning your model ====================== ROCm supports multiple techniques for :ref:`optimizing fine-tuning `, for example, LoRA, QLoRA, PEFT, and FSDP. Learn more about challenges and solutions for model fine-tuning in :doc:`../fine-tuning/index`. The following developer blogs showcase examples of fine-tuning a model on an AMD GPU. * Fine-tuning Llama2 with LoRA * `Fine-tune Llama 2 with LoRA: Customizing a large language model for question-answering `_ * Fine-tuning Llama2 with QLoRA * `Enhancing LLM accessibility: A deep dive into QLoRA through fine-tuning Llama 2 on a single AMD GPU `_ * Fine-tuning a BERT-based LLM for a text classification task using JAX * `LLM distributed supervised fine-tuning with JAX `_ * Fine-tuning StarCoder using PEFT * `Instruction fine-tuning of StarCoder with PEFT on multiple AMD GPUs `_ * Recipes for fine-tuning Llama2 and 3 with ``llama-recipes`` * `meta-llama/llama-recipes: Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs `_ --- .. meta:: :description: How to use ROCm for high-performance computing (HPC). :keywords: ROCm, AI, high performance computing, HPC, science, scientific ****************** Using ROCm for HPC ****************** The ROCm open-source software stack is optimized to extract high-performance computing (HPC) workload performance from AMD Instinct™ GPUs while maintaining compatibility with industry software frameworks. ROCm enhances support and access for developers by providing streamlined and improved tools that significantly increase productivity. Being open-source, ROCm fosters innovation, differentiation, and collaboration within the developer community, making it a powerful and accessible solution for leveraging the full potential of AMD GPUs' capabilities in diverse computational applications. * For more information, see :doc:`What is ROCm? <../../what-is-rocm>`. * For guidance on installing ROCm, see :doc:`rocm-install-on-linux:index`. See the :doc:`Compatibility matrix <../../compatibility/compatibility-matrix>` for details on hardware and operating system support. Some of the most popular HPC frameworks are part of the ROCm platform, including those to help parallelize operations across multiple GPUs and servers, handle memory hierarchies, and solve linear systems. .. 
image:: ../../data/how-to/rocm-for-hpc/hpc-stack-2024_6_20.png :align: center :alt: Software and hardware ecosystem surrounding ROCm and AMD Instinct for HPC The following catalog of GPU-accelerated solutions includes a vast set of platform-compatible HPC applications, including those for astrophysics, climate and weather, computational chemistry, computational fluid dynamics, earth science, genomics, geophysics, molecular dynamics, and physics computing. Refer to the resources in the following table for instructions on building, running, and deploying these applications on ROCm-capable systems with AMD Instinct GPUs. Each build container provides parameters to specify different source code branches, release versions of ROCm, OpenMPI, UCX, and Ubuntu versions. .. _hpc-apps: .. Reduce font size of HPC app descriptions slightly. .. raw:: html .. container:: :name: hpc-apps-table .. list-table:: :header-rows: 1 :stub-columns: 1 :widths: 2 2 5 * - Application domain - HPC application - Description * - Physics - `Chroma `_ - The Chroma package supports data-parallel programming constructs for lattice field theory and in particular lattice QCD. It uses the SciDAC QDP++ data-parallel programming (in C++) that presents a single high-level code image to the user, but can generate highly optimized code for many architectural systems including single node workstations, multi and many-core nodes, clusters of nodes via QMP, and classic vector computers. * - - `Grid `_ - Grid is a library for lattice QCD calculations that employs a high-level data parallel approach while using a number of techniques to target multiple types of parallelism. The library currently supports MPI, OpenMP, and short vector parallelism. The SIMD instruction sets covered include SSE, AVX, AVX2, FMA4, IMCI, and AVX512. Recent releases expanded this support to include GPU offloading. * - - `MILC `_ - The MILC Code is a set of research codes developed by MIMD Lattice Computation (MILC) collaboration for doing simulations of four dimensional SU(3) lattice gauge theory on MIMD parallel machines scaling from single-processor workstations to HPC systems. The MILC Code is publicly available for research purposes. Publications of work done using this code or derivatives of this code should acknowledge this use. * - - `QUDA `_ - Library designed for efficient lattice QCD computations on GPUs. It includes optimized Dirac operators and a variety of fermion solvers and conjugate gradient (CG) implementations, enhancing performance and accuracy in lattice QCD simulations. * - - `PIConGPU `_ - PIConGPU (Particle-in-cell on Graphics Processing Units) is an Open Source simulations framework for plasma and laser-plasma physics used to develop advanced particle accelerators for radiation therapy of cancer, high energy physics and photon science. * - Astrophysics - `Cholla `_ - An astrophysical simulation code developed for the extreme environments encountered in astrophysical systems. * - Geophysics - `SPECFEM3D Cartesian `_ - SPECFEM3D Cartesian simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra (structured or not.) It can, for instance, model seismic waves propagating in sedimentary basins or any other regional geological model following earthquakes. It can also be used for non-destructive testing or for ocean acoustics. * - Molecular dynamics - `Amber `_ - Amber is a suite of biomolecular simulation programs. 
It is a set of molecular mechanical force fields for simulating biomolecules. Amber is also a package of molecular simulation programs which includes source code and demos. * - - `GROMACS with HIP (AMD implementation) `_ - GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This AMD container is based on a released version of GROMACS modified by AMD. This container only supports up to a 8 GPU configuration * - - `LAMMPS `_ - LAMMPS is a classical molecular dynamics code with a focus on materials modeling. It's an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. * - Computational fluid dynamics - `Ansys Fluent `_ - Ansys Fluent is an advanced computational fluid dynamics (CFD) tool for simulating and analyzing fluid flow, heat transfer, and related phenomena in complex systems. It offers a range of powerful features for detailed and accurate modeling of various physical processes, including turbulence, chemical reactions, and multiphase flows. * - - `NEKO `_ - Neko is a portable framework for high-order spectral element flow simulations. Written in modern Fortran, Neko adopts an object-oriented approach, allowing multi-tier abstractions of the solver stack and facilitating various hardware backends ranging from general-purpose processors, CUDA and HIP enabled accelerators to SX-Aurora vector processors. * - - `Simcenter Star-CCM+ `_ - Simcenter Star-CCM+ is a comprehensive computational fluid dynamics (CFD) and multiphysics simulation tool developed by Siemens Digital Industries Software. It is designed to help engineers and researchers analyze and optimize the performance of products and systems across various industries. * - Quantum Monte Carlo Simulation - `QMCPACK `_ - QMCPACK is an open-source production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, 2D nanomaterials and solids. The solid-state capabilities include metallic systems as well as insulators. QMCPACK is expected to run well on workstations through to the latest generation supercomputers. Besides high performance, particular emphasis is placed on code quality and reproducibility. * - Climate and weather - `MPAS `_ - The Model for Prediction Across Scales (MPAS) is a collaborative project for developing atmosphere, ocean, and other earth-system simulation components for use in climate, regional climate, and weather studies. * - Energy, Oil, and Gas - `DevitoPRO `_ - DevitoPRO is an advanced extension of the open-source Devito platform with added features tailored for high-demand production workflows. It supports high-performance computing (HPC) needs, especially in seismic imaging and inversion. It is used to perform optimized finite difference (FD) computations from high-level symbolic problem definitions. DevitoPro performs automated code generation and Just-In-time (JIT) compilation based on symbolic equations defined in SymPy to create and execute highly optimized Finite Difference stencil kernels on multiple computer platforms. * - Benchmark - `rocHPL `_ - HPL, or High-Performance Linpack, is a benchmark which solves a uniformly random system of linear equations and reports floating-point execution rate. * - - `rocHPL-MxP `_ - Benchmark that highlights the convergence of HPC and AI workloads by solving a system of linear equations using novel, mixed-precision algorithms. 
* - - `HPCG `_ - HPCG, or the High Performance Conjugate Gradient Benchmark complements the High Performance LINPACK (HPL) benchmark. The computational and data access patterns of HPCG are designed to closely match a broad set of important applications not represented by HPL, and to incentivize computer system designers to invest in capabilities that will benefit the collective performance of these applications. * - Tools and libraries - `AMD ROCm with OpenMPI container `_ - Base container for GPU-aware MPI with ROCm for HPC applications. This project provides a boilerplate for building and running a Docker container with ROCm supporting GPU-aware MPI implementations using OpenMPI or UCX. * - - `AMD ROCm with MPICH container `_ - Base container for GPU-aware MPI with ROCm for HPC applications. This project provides a boilerplate for building and running a Docker container with ROCm supporting GPU-aware MPI implementations using MPICH. * - - `AMD ROCm with Conda Environment Container `_ - Container recipe that uses the `base-gpu-mpi-rocm-docker` as the base and adds Conda. The container can be used as a base for applications that require conda applications. * - - `Kokkos `_ - Kokkos is a programming model in C++ for writing performance portable applications for use across HPC platforms. It provides abstractions for both parallel execution of code and data management. Kokkos is designed to target complex node architectures with N-level memory hierarchies and multiple types of execution resources. * - - `PyFR `_ - PyFR is an open-source Python based framework for solving advection-diffusion type problems on streaming architectures using the Flux Reconstruction approach of Huynh. The framework is designed to solve a range of governing systems on mixed unstructured grids containing various element types. It is also designed to target a range of hardware platforms via use of an in-built domain specific language derived from the Mako templating engine. * - - `RAJA `_ - RAJA is a library of C++ software abstractions, primarily developed at Lawrence Livermore National Laboratory (LLNL), that enables architecture and programming model portability for HPC applications. * - - `Trilinos `_ - The Trilinos Project is an effort to develop algorithms and enabling technologies within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. * - - `VLLM `_ - The VLLM project helps to build a Dockerfile for performance testing of the LLAMA2 applications. This Dockerfile uses a base install that includes Ubuntu 20.04, ROCm 6.1.2 and Python 3.9. The container can host the LLAMA2 applications (LLMs) and requires some large input files for testing. To learn about ROCm for AI applications, see :doc:`../rocm-for-ai/index`. --- .. meta:: :description: Setting the number of CUs :keywords: CU, CUs, number of CUs, compute units .. _settings-cus-reference: ************************************************************* Setting the number of compute units ************************************************************* The GPU driver provides two environment variables to set the number of CUs used: - ``HSA_CU_MASK`` - ``ROC_GLOBAL_CU_MASK`` The ``ROC_GLOBAL_CU_MASK`` variable sets the CU mask on queues created by HIP or OpenCL runtimes. The ``HSA_CU_MASK`` variable sets the mask on a lower level of queue creation in the driver. It also sets the mask on the queues being profiled. .. 
tip:: When using GPUs to accelerate compute workloads, it sometimes becomes necessary to configure the hardware's usage of compute units (CUs). This is a more advanced option, so please read this page before experimentation. The environment variables have the following syntax: :: ID = [0-9][0-9]* ex. base 10 numbers ID_list = (ID | ID-ID)[, (ID | ID-ID)]* ex. 0,2-4,7 GPU_list = ID_list ex. 0,2-4,7 CU_list = 0x[0-F]* | ID_list ex. 0x337F OR 0,2-4,7 CU_Set = GPU_list : CU_list ex. 0,2-4,7:0-15,32-47 OR 0,2-4,7:0x337F HSA_CU_MASK = CU_Set [; CU_Set]* ex. 0,2-4,7:0-15,32-47; 3-9:0x337F The GPU indices are taken post ``ROCR_VISIBLE_DEVICES`` reordering. The listed or masked CUs are enabled for listed GPUs, and the others are disabled. Unlisted GPUs are not affected, and all of their CUs remain enabled. Variable parsing stops when a syntax error occurs; the erroneous set and any following sets are ignored. Repeating GPU or CU IDs results in a syntax error. Specifying a mask with no usable CUs (CU_list is 0x0) results in a syntax error. To exclude GPU devices, use ``ROCR_VISIBLE_DEVICES``. .. note:: These environment variables only affect ROCm software, not graphics applications. Not all CU configurations are valid on all devices. For example, on devices where two CUs can be combined into a WGP (for kernels running in WGP mode), it’s not valid to disable only a single CU in a WGP. --- .. meta:: :description: Learn about AMD hardware optimization for HPC-specific and workstation workloads. :keywords: high-performance computing, HPC, Instinct GPUs, Radeon, tuning, tuning guide, AMD, ROCm ******************* System optimization ******************* This guide outlines system setup and tuning suggestions for AMD hardware to optimize performance for specific types of workloads or use cases. The contents are structured according to the hardware: .. grid:: 2 .. grid-item-card:: AMD RDNA * :doc:`AMD RDNA2 system optimization ` .. grid-item-card:: AMD Instinct * `AMD Instinct MI300X `_ * `AMD Instinct MI300A `_ * `AMD Instinct MI200 `_ * `AMD Instinct MI100 `_ --- :orphan: .. meta:: :description: How to configure MI300X GPUs to fully leverage their capabilities and achieve optimal performance. :keywords: ROCm, AI, machine learning, MI300X, LLM, usage, tutorial, optimization, tuning ************************ AMD MI300X tuning guides ************************ The tuning guides in this section provide a comprehensive summary of the necessary steps to properly configure your system for AMD Instinct™ MI300X GPUs. They include detailed instructions on system settings and application tuning suggestions to help you fully leverage the capabilities of these GPUs, thereby achieving optimal performance. * :doc:`/how-to/rocm-for-ai/inference-optimization/workload` * `AMD Instinct MI300X system optimization `_ --- .. meta:: :description: Environment variables reference :keywords: AMD, ROCm, environment variables, environment, reference, settings .. role:: cpp(code) :language: cpp .. _env-variables-reference: ************************************************************* ROCm environment variables ************************************************************* ROCm provides a set of environment variables that allow users to configure and optimize their development and runtime experience. These variables define key settings such as installation paths, platform selection, and runtime behavior for applications running on AMD accelerators and GPUs.
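As a minimal, hypothetical illustration of how such a variable is consumed, the sketch below restricts a PyTorch process to two GPUs using ``HIP_VISIBLE_DEVICES``, one of the GPU isolation variables covered on this page. The value must be in place before the HIP runtime in the process initializes, so it is typically exported in the launching shell or set at the very top of the script.

.. code-block:: python

   # Minimal sketch: restrict this process to the first two GPUs.
   # Set the variable before any GPU work is performed (or export it
   # in the shell before launching the program instead).
   import os
   os.environ["HIP_VISIBLE_DEVICES"] = "0,1"   # GPU isolation variable

   import torch
   print(torch.cuda.device_count())            # reports at most 2 with the setting above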
This page outlines commonly used environment variables across different components of the ROCm software stack, including HIP and ROCR-Runtime. Understanding these variables can help streamline software development and execution in ROCm-based environments. HIP environment variables ========================= The following tables list the HIP environment variables. GPU isolation variables -------------------------------------------------------------------------------- .. remote-content:: :repo: ROCm/rocm-systems :path: /projects/hip/docs/reference/env_variables/gpu_isolation_hip_env.rst :default_branch: develop :tag_prefix: docs/ Profiling variables -------------------------------------------------------------------------------- .. remote-content:: :repo: ROCm/rocm-systems :path: /projects/hip/docs/reference/env_variables/profiling_hip_env.rst :default_branch: develop :tag_prefix: docs/ Debug variables -------------------------------------------------------------------------------- .. remote-content:: :repo: ROCm/rocm-systems :path: /projects/hip/docs/reference/env_variables/debug_hip_env.rst :default_branch: develop :tag_prefix: docs/ Memory management related variables -------------------------------------------------------------------------------- .. remote-content:: :repo: ROCm/rocm-systems :path: /projects/hip/docs/reference/env_variables/memory_management_hip_env.rst :default_branch: develop :tag_prefix: docs/ Other useful variables -------------------------------------------------------------------------------- .. remote-content:: :repo: ROCm/rocm-systems :path: /projects/hip/docs/reference/env_variables/miscellaneous_hip_env.rst :default_branch: develop :tag_prefix: docs/ ROCR-Runtime environment variables ================================== The following table lists the ROCR-Runtime environment variables: .. remote-content:: :repo: ROCm/rocm-systems :path: /projects/rocr-runtime/runtime/docs/data/env_variables.rst :default_branch: develop :tag_prefix: docs/ HIPCC environment variables =========================== This topic provides descriptions of the HIPCC environment variables. .. remote-content:: :repo: ROCm/llvm-project :path: amd/hipcc/docs/env.rst :default_branch: amd-staging :start_line: 14 :tag_prefix: docs/ Environment variables in ROCm libraries ======================================= Many ROCm libraries define environment variables for specific tuning, debugging, or behavioral control. The table below provides an overview and links to further documentation. .. list-table:: :header-rows: 1 :widths: 30, 70 * - Library - Purpose of Environment Variables * - :doc:`hipBLASLt ` - Manage logging, debugging, offline tuning, and stream-K configuration for hipBLASLt. * - :doc:`hipSPARSELt ` - Control logging, debugging and performance monitoring of hipSPARSELt. * - :doc:`rocBLAS ` - Performance tuning, kernel selection, logging, and debugging for BLAS operations. * - :doc:`rocSolver ` - Control logging of rocSolver. * - :doc:`rocSPARSE ` - Control logging of rocSPARSE. * - :doc:`MIGraphX ` - Control debugging, testing, and model performance tuning options for MIGraphX. * - :doc:`MIOpen ` - Control MIOpen logging and debugging, find mode and algorithm behavior and others. * - :doc:`MIVisionX ` - Control core OpenVX, GPU/device and debugging/profiling, stitching and chroma key configurations, file I/O operations, model deployment, and neural network parameters of MIVisionX. * - :doc:`RCCL ` - Control the logging, debugging, compiler and assembly behavior, and cache of RPP. 
* - :doc:`RPP ` - Logging, debugging, compiler and assembly management, and cache control in RPP * - `Tensile `_ - Enable testing, debugging, and experimental features for Tensile clients and applications Key single-variable details =========================== This section provides detailed descriptions, in the standard format, for ROCm libraries that feature a single, key environment variable (or a very minimal set) which is documented directly on this page for convenience. .. _rocalution-vars-detail: rocALUTION ---------- .. list-table:: :header-rows: 1 :widths: 70,30 * - Environment variable - Value * - | ``ROCALUTION_LAYER`` | If set to ``1``, enable file logging. Logs each rocALUTION function call including object constructor/destructor, address of the object, memory allocation, data transfers, all function calls for matrices, vectors, solvers, and preconditioners. The log file is placed in the working directory. - | ``1`` (Enable trace file logging) | Default: Not set. --- .. meta:: :description: AMD Instinct™ GPU, AMD Radeon PRO™, and AMD Radeon™ GPU architecture information :keywords: Instinct, Radeon, accelerator, GCN, CDNA, RDNA, GPU, architecture, VRAM, Compute Units, Cache, Registers, LDS, Register File GPU hardware specifications =========================================== The following tables provide an overview of the hardware specifications for AMD Instinct™ GPUs, and AMD Radeon™ PRO and Radeon™ GPUs. For more information about ROCm hardware compatibility, see the ROCm `Compatibility matrix `_. .. tab-set:: .. tab-item:: AMD Instinct GPUs .. list-table:: :header-rows: 1 :name: instinct-arch-spec-table * - Model - Architecture - LLVM target name - VRAM (GiB) - Compute Units - Wavefront Size - LDS (KiB) - L3 Cache (MiB) - L2 Cache (MiB) - L1 Vector Cache (KiB) - L1 Scalar Cache (KiB) - L1 Instruction Cache (KiB) - VGPR File (KiB) - SGPR File (KiB) - GFXIP Major version - GFXIP Minor version * - MI355X - CDNA4 - gfx950 - 288 - 256 (32 per XCD) - 64 - 160 - 256 - 32 (4 per XCD) - 32 - 16 per 2 CUs - 64 per 2 CUs - 512 - 12.5 - 9 - 5 * - MI350X - CDNA4 - gfx950 - 288 - 256 (32 per XCD) - 64 - 160 - 256 - 32 (4 per XCD) - 32 - 16 per 2 CUs - 64 per 2 CUs - 512 - 12.5 - 9 - 5 * - MI325X - CDNA3 - gfx942 - 256 - 304 (38 per XCD) - 64 - 64 - 256 - 32 (4 per XCD) - 32 - 16 per 2 CUs - 64 per 2 CUs - 512 - 12.5 - 9 - 4 * - MI300X - CDNA3 - gfx942 - 192 - 304 (38 per XCD) - 64 - 64 - 256 - 32 (4 per XCD) - 32 - 16 per 2 CUs - 64 per 2 CUs - 512 - 12.5 - 9 - 4 * - MI300A - CDNA3 - gfx942 - 128 - 228 (38 per XCD) - 64 - 64 - 256 - 24 (4 per XCD) - 32 - 16 per 2 CUs - 64 per 2 CUs - 512 - 12.5 - 9 - 4 * - MI250X - CDNA2 - gfx90a - 128 - 220 (110 per GCD) - 64 - 64 - - 16 (8 per GCD) - 16 - 16 per 2 CUs - 32 per 2 CUs - 512 - 12.5 - 9 - 0 * - MI250 - CDNA2 - gfx90a - 128 - 208 (104 per GCD) - 64 - 64 - - 16 (8 per GCD) - 16 - 16 per 2 CUs - 32 per 2 CUs - 512 - 12.5 - 9 - 0 * - MI210 - CDNA2 - gfx90a - 64 - 104 - 64 - 64 - - 8 - 16 - 16 per 2 CUs - 32 per 2 CUs - 512 - 12.5 - 9 - 0 * - MI100 - CDNA - gfx908 - 32 - 120 - 64 - 64 - - 8 - 16 - 16 per 3 CUs - 32 per 3 CUs - 256 VGPR and 256 AccVGPR - 12.5 - 9 - 0 * - MI60 - GCN5.1 - gfx906 - 32 - 64 - 64 - 64 - - 4 - 16 - 16 per 3 CUs - 32 per 3 CUs - 256 - 12.5 - 9 - 0 * - MI50 (32GB) - GCN5.1 - gfx906 - 32 - 60 - 64 - 64 - - 4 - 16 - 16 per 3 CUs - 32 per 3 CUs - 256 - 12.5 - 9 - 0 * - MI50 (16GB) - GCN5.1 - gfx906 - 16 - 60 - 64 - 64 - - 4 - 16 - 16 per 3 CUs - 32 per 3 CUs - 256 - 12.5 - 9 - 0 * - MI25 - GCN5.0 - gfx900 - 16  - 64 - 64 - 
64  - - 4  - 16  - 16 per 3 CUs - 32 per 3 CUs - 256 - 12.5 - 9 - 0 * - MI8 - GCN3.0 - gfx803 - 4 - 64 - 64 - 64 - - 2 - 16 - 16 per 4 CUs - 32 per 4 CUs - 256 - 12.5 - 8 - 0 * - MI6 - GCN4.0 - gfx803 - 16 - 36 - 64 - 64 - - 2 - 16 - 16 per 4 CUs - 32 per 4 CUs - 256 - 12.5 - 8 - 0 .. tab-item:: AMD Radeon PRO GPUs .. list-table:: :header-rows: 1 :name: radeon-pro-arch-spec-table * - Model - Architecture - LLVM target name - VRAM (GiB) - Compute Units - Wavefront Size - LDS (KiB) - Infinity Cache (MiB) - L2 Cache (MiB) - Graphics L1 Cache (KiB) - L0 Vector Cache (KiB) - L0 Scalar Cache (KiB) - L0 Instruction Cache (KiB) - VGPR File (KiB) - SGPR File (KiB) - GFXIP Major version - GFXIP Minor version * - Radeon AI PRO R9700 - RDNA4 - gfx1201 - 32 - 64 - 32 or 64 - 128 - 64 - 8 - N/A - 32 - 16 - 32 - 768 - 32 - 12 - 0 * - Radeon PRO V710 - RDNA3 - gfx1101 - 28 - 54 - 32 or 64 - 128 - 56 - 4 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon PRO W7900 Dual Slot - RDNA3 - gfx1100 - 48 - 96 - 32 or 64 - 128 - 96 - 6 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon PRO W7900 - RDNA3 - gfx1100 - 48 - 96 - 32 or 64 - 128 - 96 - 6 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon PRO W7800 48GB - RDNA3 - gfx1100 - 48 - 70 - 32 or 64 - 128 - 96 - 6 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon PRO W7800 - RDNA3 - gfx1100 - 32 - 70 - 32 or 64 - 128 - 64 - 6 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon PRO W7700 - RDNA3 - gfx1101 - 16 - 48 - 32 or 64 - 128 - 64 - 4 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon PRO W6800 - RDNA2 - gfx1030 - 32 - 60 - 32 or 64 - 128 - 128 - 4 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon PRO W6600 - RDNA2 - gfx1032 - 8 - 28 - 32 or 64 - 128 - 32 - 2 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon PRO V620 - RDNA2 - gfx1030 - 32 - 72 - 32 or 64 - 128 - 128 - 4 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon Pro W5500 - RDNA - gfx1012 - 8 - 22 - 32 or 64 - 128 - - 4 - 128 - 16 - 16 - 32 - 512 - 20 - 10 - 1 * - Radeon Pro VII - GCN5.1 - gfx906 - 16 - 60 - 64 - 64 - - 4 - - 16 - 16 per 3 CUs - 32 per 3 CUs - 256 - 12.5 - 9 - 0 .. tab-item:: AMD Radeon GPUs .. 
list-table:: :header-rows: 1 :name: radeon-arch-spec-table * - Model - Architecture - LLVM target name - VRAM (GiB) - Compute Units - Wavefront Size - LDS (KiB) - Infinity Cache (MiB) - L2 Cache (MiB) - Graphics L1 Cache (KiB) - L0 Vector Cache (KiB) - L0 Scalar Cache (KiB) - L0 Instruction Cache (KiB) - VGPR File (KiB) - SGPR File (KiB) - GFXIP Major version - GFXIP Minor version * - Radeon RX 9070 XT - RDNA4 - gfx1201 - 16 - 64 - 32 or 64 - 128 - 64 - 8 - N/A - 32 - 16 - 32 - 768 - 32 - 12 - 0 * - Radeon RX 9070 GRE - RDNA4 - gfx1201 - 16 - 48 - 32 or 64 - 128 - 48 - 6 - N/A - 32 - 16 - 32 - 768 - 32 - 12 - 0 * - Radeon RX 9070 - RDNA4 - gfx1201 - 16 - 56 - 32 or 64 - 128 - 64 - 8 - N/A - 32 - 16 - 32 - 768 - 32 - 12 - 0 * - Radeon RX 9060 XT - RDNA4 - gfx1200 - 16 - 32 - 32 or 64 - 128 - 32 - 4 - N/A - 32 - 16 - 32 - 768 - 32 - 12 - 0 * - Radeon RX 9060 - RDNA4 - gfx1200 - 8 - 28 - 32 or 64 - 128 - 32 - 4 - N/A - 32 - 16 - 32 - 768 - 32 - 12 - 0 * - Radeon RX 7900 XTX - RDNA3 - gfx1100 - 24 - 96 - 32 or 64 - 128 - 96 - 6 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon RX 7900 XT - RDNA3 - gfx1100 - 20 - 84 - 32 or 64 - 128 - 80 - 6 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon RX 7900 GRE - RDNA3 - gfx1100 - 16 - 80 - 32 or 64 - 128 - 64 - 6 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon RX 7800 XT - RDNA3 - gfx1101 - 16 - 60 - 32 or 64 - 128 - 64 - 4 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon RX 7700 XT - RDNA3 - gfx1101 - 12 - 54 - 32 or 64 - 128 - 48 - 4 - 256 - 32 - 16 - 32 - 768 - 32 - 11 - 0 * - Radeon RX 7600 - RDNA3 - gfx1102 - 8 - 32 - 32 or 64 - 128 - 32 - 2 - 256 - 32 - 16 - 32 - 512 - 32 - 11 - 0 * - Radeon RX 6950 XT - RDNA2 - gfx1030 - 16 - 80 - 32 or 64 - 128 - 128 - 4 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6900 XT - RDNA2 - gfx1030 - 16 - 80 - 32 or 64 - 128 - 128 - 4 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6800 XT - RDNA2 - gfx1030 - 16 - 72 - 32 or 64 - 128 - 128 - 4 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6800 - RDNA2 - gfx1030 - 16 - 60 - 32 or 64 - 128 - 128 - 4 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6750 XT - RDNA2 - gfx1031 - 12 - 40 - 32 or 64 - 128 - 96 - 3 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6700 XT - RDNA2 - gfx1031 - 12 - 40 - 32 or 64 - 128 - 96 - 3 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6700 - RDNA2 - gfx1031 - 10 - 36 - 32 or 64 - 128 - 80 - 3 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6650 XT - RDNA2 - gfx1032 - 8 - 32 - 32 or 64 - 128 - 32 - 2 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6600 XT - RDNA2 - gfx1032 - 8 - 32 - 32 or 64 - 128 - 32 - 2 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon RX 6600 - RDNA2 - gfx1032 - 8 - 28 - 32 or 64 - 128 - 32 - 2 - 128 - 16 - 16 - 32 - 512 - 32 - 10 - 3 * - Radeon VII - GCN5.1 - gfx906 - 16 - 60 - 64 - 64 per CU - - 4 - - 16 - 16 per 3 CUs - 32 per 3 CUs - 256 - 12.5 - 9 - 0 Glossary ======== For more information about the terms used, see the :ref:`specific documents and guides `, or :doc:`Understanding the HIP programming model`. **LLVM target name** Argument to pass to clang in ``--offload-arch`` to compile code for the given architecture. **VRAM** Amount of memory available on the GPU. **Compute Units** Number of compute units on the GPU. **Wavefront Size** Amount of work items that execute in parallel on a single compute unit. This is equivalent to the warp size in HIP. **LDS** The Local Data Share (LDS) is a low-latency, high-bandwidth scratch pad memory. 
It is local to the compute units, and can be shared by all work items in a work group. In HIP, the LDS can be used for shared memory, which is shared by all threads in a block. **L3 Cache (CDNA/GCN only)** Size of the level 3 cache. Shared by all compute units on the same GPU. Caches data and instructions. Similar to the Infinity Cache on RDNA architectures. **Infinity Cache (RDNA only)** Size of the infinity cache. Shared by all compute units on the same GPU. Caches data and instructions. Similar to the L3 Cache on CDNA/GCN architectures. **L2 Cache** Size of the level 2 cache. Shared by all compute units on the same GCD. Caches data and instructions. **Graphics L1 Cache (RDNA only)** An additional cache level that only exists in RDNA architectures. Local to a shader array. **L1 Vector Cache (CDNA/GCN only)** Size of the level 1 vector data cache. Local to a compute unit. This is the L0 vector cache in RDNA architectures. **L1 Scalar Cache (CDNA/GCN only)** Size of the level 1 scalar data cache. Usually shared by several compute units. This is the L0 scalar cache in RDNA architectures. **L1 Instruction Cache (CDNA/GCN only)** Size of the level 1 instruction cache. Usually shared by several compute units. This is the L0 instruction cache in RDNA architectures. **L0 Vector Cache (RDNA only)** Size of the level 0 vector data cache. Local to a compute unit. This is the L1 vector cache in CDNA/GCN architectures. **L0 Scalar Cache (RDNA only)** Size of the level 0 scalar data cache. Usually shared by several compute units. This is the L1 scalar cache in CDNA/GCN architectures. **L0 Instruction Cache (RDNA only)** Size of the level 0 instruction cache. Usually shared by several compute units. This is the L1 instruction cache in CDNA/GCN architectures. **VGPR File** Size of the Vector General Purpose Register (VGPR) file. It holds data used in vector instructions. GPUs with matrix cores also have AccVGPRs, which are Accumulation General Purpose Vector Registers, used specifically in matrix instructions. **SGPR File** Size of the Scalar General Purpose Register (SGPR) file. Holds data used in scalar instructions. **GFXIP** GFXIP (Graphics IP) is a versioning system used by AMD to identify the GPU architecture and its instruction set. It helps categorize different generations of GPUs and their feature sets. **GFXIP major version** Defines the GPU's core instruction set and architecture, which determines compatibility with software stacks such as HIP and OpenCL. For example, a GFXIP 11 major version corresponds to the RDNA 3 (Navi 3x) architecture, influencing driver support and available compute features. **GFXIP minor version** Represents specific variations within a GFXIP major version and affects feature sets, optimizations, and driver behavior in software stacks such as HIP and OpenCL. Different GPU models within the same major version can have unique capabilities, impacting performance and supported instructions. **GCD** Graphics Compute Die. **XCD** Accelerator Complex Die. --- .. meta:: :description: AMD Instinct GPU, AMD Radeon PRO, and AMD Radeon GPU atomics operations information :keywords: Atomics operations, atomic bitwise functions, atomics add, atomics subtraction, atomics exchange, atomics min, atomics max ..
_hw_atomics_operation_support: Hardware atomics operation support ================================================================================ :ref:`Atomic operations ` guarantee that the operation is completed as an indivisible unit, preventing race conditions where simultaneous access to the same memory location could lead to incorrect or undefined behavior. This topic summarizes the support of atomic read-modify-write (atomicRMW) operations on AMD GPUs. This includes gfx9, gfx10, gfx11, and gfx12 targets and the following Instinct™ Series: - MI100 - MI200 - MI300 - MI350 The behavior of an atomic operation is affected by the memory location, memory granularity, and scope of the operation. Memory locations: - :ref:`Device memory `, that is, VRAM, the RAM on a discrete GPU device or in framebuffer carveout for APUs. This includes peer-device memory within an Infinity Fabric™ hive. - :ref:`Host memory `: in DRAM associated with the CPU (or peer device memory using PCIe® (PCI Express) peer-to-peer). This can be one of two sub-types: - Migratable memory: memory that is currently residing in host DRAM, but which can be migrated back to device memory. For example, ``hipMallocManaged()`` or :ref:`unified memory ` allocations. - :ref:`Pinned memory `: memory that is in host memory and cannot be migrated to the device (not necessarily pinned to a particular physical address, but can't be moved to device memory). ``hipHostMalloc()``, for example. Memory granularity or :ref:`coherence `: - Coarse-grained memory - This memory can be used for device-scope synchronization during the execution of a single GPU kernel. Any system-scope atomics sent to this type of memory will not achieve system-scope coherency and will instead be downgraded to device-scope as per the programming model. - This type of memory is only available on AMD GPUs. - Fine-grained memory - This memory can be used for device and system-scope synchronization during the execution of a single GPU kernel. Scopes of operations: - Device-scope or agent-scope - This atomic should happen atomically from the point of view of every thread within the device that the atomic-executing thread is in. - System-scope - This atomic should happen atomically from the point of view of every thread in all devices and on the CPUs. Support summary ================================================================================ AMD Instinct GPUs -------------------------------------------------------------------------------- **MI300 and MI350 Series** - All atomicRMW operations are forwarded out to the Infinity Fabric. - Infinity Fabric supports common integer and bitwise atomics, FP32 atomic add, packed-FP16 atomic add, packed-BF16 atomic add, and FP64 add, min, and max. - In discrete GPUs (dGPUs), if the data is stored in host memory, the atomic will be forwarded from the Infinity Fabric to PCIe. - If the PCIe bus does not support the requested atomic, the GPU's PCIe controller changes it into a load-op-store sequence. All waves on the chip submitting atomics to that address will stall waiting for the load-op-store. The sequence appears atomic to the wave, but the CPU sees it as a non-atomic load-op-store sequence. This downgrades system-scope atomics to device-scope. **MI200 Series** - L2 cache and Infinity Fabric both support common integer and bitwise atomics. - L2 cache supports FP32 atomic add, packed-FP16 atomic add, and FP64 add, min, and max.
- The Infinity Fabric does not support FP32 atomic add, packed-FP16 atomic add, or FP64 add, min, and max atomics, so these commands cannot be sent to the Infinity Fabric. - Coarse-grained memory is marked as cacheable, and atomic operations will be processed in the L2 cache. - Fine-grained memory is marked write-uncacheable through the page tables. - Atomics that hit write-uncached memory are forwarded to the Infinity Fabric. - If the uncached data is stored in host memory on a PCIe system, the atomic will be forwarded from Infinity Fabric to PCIe. Any atomic not supported by the PCIe bus will be a NOP and give an incorrect result. - If the uncached data is stored in host memory on an A+A system (system with AMD CPU and AMD GPU connected via Infinity Fabric), the atomic operation will be forwarded to the remote location and will succeed if supported by Infinity Fabric. - If the float atomics access write-uncached memory, they cannot be forwarded to the Infinity Fabric, resulting in a NOP and an incorrect outcome. **MI100** - L2 cache and Infinity Fabric both support common integer and bitwise atomics. - The L2 cache supports no-return (NoReturn) versions of packed-FP16 and FP32 atomic adds, which cannot return data. - The Infinity Fabric does not support packed-FP16 or FP32 atomic adds, preventing these commands from being transmitted through it. - Coarse-grained memory is marked as cacheable, and atomic operations will be processed in the L2 cache. - Fine-grained memory is marked uncacheable through the page tables. - Atomics that hit uncached memory are forwarded to the Infinity Fabric. - If the uncached data is stored in host memory, the atomic will be forwarded from Infinity Fabric to PCIe. Any atomic not supported by the PCIe bus will be a NOP and give an incorrect result. - If a float atomic add hits uncached memory, it cannot be forwarded to the Infinity Fabric, so it will be a NOP and give an incorrect result. AMD gfx generic targets -------------------------------------------------------------------------------- **gfx9** - L2 cache and Infinity Fabric both support common integer and bitwise atomics. - Coarse-grained memory is marked as cacheable, and atomic operations will be processed in the L2 cache. - Fine-grained memory is marked uncacheable through the page tables. - Atomics that hit uncached memory are forwarded to the Infinity Fabric. - In a dGPU: if the uncached data is stored in host memory, the atomic will be forwarded from Infinity Fabric to PCIe. Any atomic not supported by the PCIe bus will be a NOP and give an incorrect result. **gfx10** - L2 cache and Infinity Fabric both support common integer and bitwise atomics. - Coarse-grained memory is marked as cacheable, and atomic operations will be processed in the L2 cache. - Fine-grained memory is marked uncacheable through the page tables. - Atomics that hit uncached memory are forwarded to the Infinity Fabric. - In a dGPU: if the uncached data is stored in host memory, the atomic will be forwarded from Infinity Fabric to PCIe. Any atomic not supported by the PCIe bus will be a NOP and give an incorrect result. - Supports floating-point atomic min/max. - The Infinity Fabric does not support floating-point atomic min/max atomics, so these commands cannot be sent to the Infinity Fabric. - If the floating-point atomics hit uncached memory, they cannot be forwarded to the Infinity Fabric, so they will be a NOP and give an incorrect result. **gfx11** - L2 cache and Infinity Fabric both support common integer and bitwise atomics.
- L2 cache supports FP32 atomic add, min, and max. - The Infinity Fabric does not support FP32 atomic add, min, and max atomics, so these commands cannot be sent to the Infinity Fabric. - Coarse-grained memory is marked as cacheable, and atomic operations will be processed in the L2 cache. - Fine-grained memory is marked uncacheable through the page tables. - Atomics that hit write-uncached memory are forwarded to the Infinity Fabric. - In a dGPU: if the uncached data is stored in host memory, the atomic will be forwarded from Infinity Fabric to PCIe. Any atomic not supported by the PCIe bus will be a NOP and give an incorrect result. - If the float atomics hit uncached memory, they cannot be forwarded to the Infinity Fabric, so they will be a NOP and give an incorrect result. **gfx12** - L2 cache and Infinity Fabric both support common integer and bitwise atomics. - L2 cache and Infinity Fabric both also support FP32 atomic add, min, and max, packed-FP16 atomic add, and packed-BF16 atomic add. - Coarse-grained memory is marked as cacheable, and atomic operations will be processed in the L2 cache. - Fine-grained device memory is marked uncacheable through the page tables. - Atomics that hit write-uncached memory are forwarded to the Infinity Fabric. - Fine-grained system memory is marked as cacheable through the page tables. - Device-scope atomic operations will be processed in the L2 cache. - System-scope atomic operations will bypass the L2 cache and be forwarded to the Infinity Fabric. - Atomics that hit write-uncached memory are forwarded to the Infinity Fabric. - In dGPUs, if the data is stored in host memory, the atomic will be forwarded from the Infinity Fabric to PCIe. - If the PCIe bus does not support the requested atomic, the GPU's PCIe controller changes it into a load-op-store sequence. All waves on the chip submitting atomics to that address will stall waiting for the load-op-store. The sequence appears atomic to the wave, but the CPU sees it as a non-atomic load-op-store sequence. This downgrades system-scope atomics to device-scope. GPUs atomics support ================================================================================ This section presents a series of tables that show the level of atomic operation support on the hardware devices described above, across different data types, operations, and scopes. Hardware atomics support refers to the ability of GPUs to natively perform atomic operations, which are special low-level operations that ensure data consistency when multiple threads access and modify memory concurrently. CAS (Compare-and-Swap) atomic support refers to the hardware or software capability to perform an atomic Compare-and-Swap operation. PCIe atomics are a feature of the PCIe interface that enable atomic operations between devices and hosts across the PCIe bus. For further information, please check `How ROCm uses PCIe atomics `_. The tables that follow show the correctness of atomic operations on the hardware using the following notation: - ✅: Produces the correct answer. - ⚠️: Produces the correct answer, but works only at a weaker scope. - ❌: The atomic operation fails. The tables show the different types of atomic operations used by specific devices: - Native: Computes the correct result using a hardware-native atomic instruction. - CAS: Generates the correct result, but the atomic operation is implemented by the compiler for this ISA using a compare-and-swap emulation loop.
- ✅ NoReturn: Produces the correct result but does not precisely conform to the atomic API. - Scope Downgrade: Generates the correct result but operates at a weaker scope than requested. For example, if a user specifies a system-scope atomic, the operation may only function at the device scope. - NOP: The atomic operation is not executed on the target location, and the requesting thread receives back 0 as a return value. - n/a: The atomic type is not supported and cannot be executed on the specific hardware. The table selectors or options are the following: - Highest level option: - "HW atomics", where software attempts to use hardware atomics. - "CAS emulation", where software attempts to use CAS emulation. - Second-level option: - "No PCIe atomics" means the system does not support PCIe atomics between the GPU and peer/host-memory. - "PCIe atomics" means the system supports PCIe atomics between the GPU and peer/host-memory. - The third-level option is the memory granularity of the memory target. - The final option is the scope of atomic access. Integer atomics operations -------------------------------------------------------------------------------- The integer type atomic operations supported by different hardware are: - 32 bit integer - Add - Subtract - Min - Max - IncDec - 64 bit integer - Add - Min - Max AMD Instinct GPUs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The integer type atomic operations that are supported by different AMD Instinct GPUs are listed in the following table. .. .. The relative path not working in datatemplate, that's why we also need the absolute path of docs folder. .. datatemplate:nodata:: {% set ns = namespace(offset=2, previous_csv='') %} .. tab-set:: {% for (atomics_type_text, atomics_type_key) in config.html_context['atomics_type'] %} .. tab-item:: {{ atomics_type_text }} :sync: {{ atomics_type_key }} .. tab-set:: {% for (pcie_type_text, pcie_type_key) in config.html_context['pcie_type'] %} .. tab-item:: {{ pcie_type_text }} :sync: {{ pcie_type_key }} .. tab-set:: {% for (memory_type_text, memory_type_key) in config.html_context['memory_type'] %} .. tab-item:: {{ memory_type_text }} :sync: {{ memory_type_key }} .. tab-set:: {% for (granularity_type_text, granularity_type_key) in config.html_context['granularity_type'] %} .. tab-item:: {{ granularity_type_text }} :sync: {{ granularity_type_key }} .. tab-set:: {% for (scope_type_text, scope_type_key) in config.html_context['scope_type'] %} .. tab-item:: {{ scope_type_text }} :sync: {{ scope_type_key }} {# Build the CSV file path for this branch #} {% set current_csv = "data/reference/gpu-atomics-operation/" ~ atomics_type_key ~ "_" ~ pcie_type_key ~ "_instinct.csv" %} {# If we have a new CSV file, reset the offset #} {% if current_csv != ns.previous_csv %} {% set ns.offset = 2 %} {% endif %} {% set ns.previous_csv = current_csv %} {# Compute the row numbers for this leaf #} {% set start = ns.offset %} {% set end = ns.offset + 8 %} .. csv-to-list-table:: :file: {{ current_csv }} :rows: {{ start }}-{{ end }} {# Update the offset: block (9 rows) plus gap (18 rows) #} {% set ns.offset = ns.offset + 9 + 18 %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} .. AMD gfx generic targets ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The integer type atomic operations that are supported by different gfx generic targets are listed in the following table. .. ..
The relative path not working in datatemplate, that's why we also need the absolute path of docs folder. .. datatemplate:nodata:: {% set ns = namespace(offset=2, previous_csv='') %} .. tab-set:: {% for (atomics_type_text, atomics_type_key) in config.html_context['atomics_type'] %} .. tab-item:: {{ atomics_type_text }} :sync: {{ atomics_type_key }} .. tab-set:: {% for (pcie_type_text, pcie_type_key) in config.html_context['pcie_type'] %} .. tab-item:: {{ pcie_type_text }} :sync: {{ pcie_type_key }} .. tab-set:: {% for (memory_type_text, memory_type_key) in config.html_context['memory_type'] %} .. tab-item:: {{ memory_type_text }} :sync: {{ memory_type_key }} .. tab-set:: {% for (granularity_type_text, granularity_type_key) in config.html_context['granularity_type'] %} .. tab-item:: {{ granularity_type_text }} :sync: {{ granularity_type_key }} .. tab-set:: {% for (scope_type_text, scope_type_key) in config.html_context['scope_type'] %} .. tab-item:: {{ scope_type_text }} :sync: {{ scope_type_key }} {# Build the CSV file path for this branch #} {% set current_csv = "data/reference/gpu-atomics-operation/" ~ atomics_type_key ~ "_" ~ pcie_type_key ~ "_gfx.csv" %} {# If we switch CSV files, reset the offset to 2 (to skip the header row) #} {% if current_csv != ns.previous_csv %} {% set ns.offset = 2 %} {% endif %} {% set ns.previous_csv = current_csv %} {# Determine the increment based on atomics_type_key #} {% if atomics_type_key == "hw-atomics" %} {% set increment = 20 %} {% elif atomics_type_key == "cas-atomics" %} {% set increment = 18 %} {% endif %} {# Compute start and end rows (end is inclusive) #} {% set start = ns.offset %} {% set end = ns.offset + 8 %} .. csv-to-list-table:: :file: {{ current_csv }} :rows: {{ start }}-{{ end }} {# Update the offset for the next table in this CSV #} {% set ns.offset = ns.offset + 9 + increment %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} .. Bitwise atomics operations -------------------------------------------------------------------------------- The bitwise atomic operations that are supported by different hardware. - 32 bit bitwise - Exchange - Compare-and-Swap (CAS) - AND - OR - XOR - 64 bit bitwise - Exchange - CAS - AND - OR - XOR .. note:: 128-bit bitwise Exchange and CAS are not supported on AMD GPUs AMD Instinct GPUs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The bitwise atomic operations that are supported by different AMD Instinct GPUs listed in the following table. .. .. The relative path not working in datatemplate, that's why we also need the absolute path of docs folder. .. datatemplate:nodata:: {% set ns = namespace(offset=19, previous_csv='') %} .. tab-set:: {% for (atomics_type_text, atomics_type_key) in config.html_context['atomics_type'] %} .. tab-item:: {{ atomics_type_text }} :sync: {{ atomics_type_key }} .. tab-set:: {% for (pcie_type_text, pcie_type_key) in config.html_context['pcie_type'] %} .. tab-item:: {{ pcie_type_text }} :sync: {{ pcie_type_key }} .. tab-set:: {% for (memory_type_text, memory_type_key) in config.html_context['memory_type'] %} .. tab-item:: {{ memory_type_text }} :sync: {{ memory_type_key }} .. tab-set:: {% for (granularity_type_text, granularity_type_key) in config.html_context['granularity_type'] %} .. tab-item:: {{ granularity_type_text }} :sync: {{ granularity_type_key }} .. tab-set:: {% for (scope_type_text, scope_type_key) in config.html_context['scope_type'] %} .. 
tab-item:: {{ scope_type_text }} :sync: {{ scope_type_key }} {# Build the CSV file path for this branch #} {% set current_csv = "data/reference/gpu-atomics-operation/" ~ atomics_type_key ~ "_" ~ pcie_type_key ~ "_instinct.csv" %} {# If we have a new CSV file, reset the offset #} {% if current_csv != ns.previous_csv %} {% set ns.offset = 19 %} {% endif %} {% set ns.previous_csv = current_csv %} {# Compute the row numbers for this leaf #} {% set start = ns.offset %} {% set end = ns.offset + 9 %} .. csv-to-list-table:: :file: {{ current_csv }} :rows: {{ start }}-{{ end }} {# Update the offset: block (10 rows) plus gap (17 rows) #} {% set ns.offset = ns.offset + 10 + 17 %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} .. AMD gfx generic targets ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The bitwise atomic operations that are supported by different AMD gfx generic targets listed in the following table. .. .. The relative path not working in datatemplate, that's why we also need the absolute path of docs folder. .. datatemplate:nodata:: {% set ns = namespace(offset=19, previous_csv='') %} .. tab-set:: {% for (atomics_type_text, atomics_type_key) in config.html_context['atomics_type'] %} .. tab-item:: {{ atomics_type_text }} :sync: {{ atomics_type_key }} .. tab-set:: {% for (pcie_type_text, pcie_type_key) in config.html_context['pcie_type'] %} .. tab-item:: {{ pcie_type_text }} :sync: {{ pcie_type_key }} .. tab-set:: {% for (memory_type_text, memory_type_key) in config.html_context['memory_type'] %} .. tab-item:: {{ memory_type_text }} :sync: {{ memory_type_key }} .. tab-set:: {% for (granularity_type_text, granularity_type_key) in config.html_context['granularity_type'] %} .. tab-item:: {{ granularity_type_text }} :sync: {{ granularity_type_key }} .. tab-set:: {% for (scope_type_text, scope_type_key) in config.html_context['scope_type'] %} .. tab-item:: {{ scope_type_text }} :sync: {{ scope_type_key }} {# Build the CSV file path for this branch #} {% set current_csv = "data/reference/gpu-atomics-operation/" ~ atomics_type_key ~ "_" ~ pcie_type_key ~ "_gfx.csv" %} {# If we switch CSV files, reset the offset to 2 (to skip the header row) #} {% if current_csv != ns.previous_csv %} {% set ns.offset = 19 %} {% endif %} {% set ns.previous_csv = current_csv %} {# Determine the increment based on atomics_type_key #} {% if atomics_type_key == "hw-atomics" %} {% set increment = 19 %} {% elif atomics_type_key == "cas-atomics" %} {% set increment = 17 %} {% endif %} {# Compute start and end rows (end is inclusive) #} {% set start = ns.offset %} {% set end = ns.offset + 9 %} .. csv-to-list-table:: :file: {{ current_csv }} :rows: {{ start }}-{{ end }} {# Update the offset for the next table in this CSV #} {% set ns.offset = ns.offset + 10 + increment %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} .. Float atomics operations -------------------------------------------------------------------------------- The float types atomic operations that are supported by different hardware. 
- 32-bit IEEE 754 floating point ('single precision', FP32) - Add - Min - Max - 64-bit IEEE 754 floating point ('double precision', FP64) - Add - Min - Max - 16-bit IEEE 754 floating point ('half precision", FP16) - Add - 2xPacked 16-bit IEEE 754 floating point ('half precision', FP16) - Add - BrainFloat-16 floating point (BF16) - Add - 2xPacked BrainFloat-16 floating point (BF16) - Add AMD Instinct GPUs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The float type atomic operations that are supported by different AMD Instinct GPUs listed in the following table. .. .. The relative path not working in datatemplate, that's why we also need the absolute path of docs folder. .. datatemplate:nodata:: {% set ns = namespace(offset=11, previous_csv='') %} .. tab-set:: {% for (atomics_type_text, atomics_type_key) in config.html_context['atomics_type'] %} .. tab-item:: {{ atomics_type_text }} :sync: {{ atomics_type_key }} .. tab-set:: {% for (pcie_type_text, pcie_type_key) in config.html_context['pcie_type'] %} .. tab-item:: {{ pcie_type_text }} :sync: {{ pcie_type_key }} .. tab-set:: {% for (memory_type_text, memory_type_key) in config.html_context['memory_type'] %} .. tab-item:: {{ memory_type_text }} :sync: {{ memory_type_key }} .. tab-set:: {% for (granularity_type_text, granularity_type_key) in config.html_context['granularity_type'] %} .. tab-item:: {{ granularity_type_text }} :sync: {{ granularity_type_key }} .. tab-set:: {% for (scope_type_text, scope_type_key) in config.html_context['scope_type'] %} .. tab-item:: {{ scope_type_text }} :sync: {{ scope_type_key }} {# Build the CSV file path for this branch #} {% set current_csv = "data/reference/gpu-atomics-operation/" ~ atomics_type_key ~ "_" ~ pcie_type_key ~ "_instinct.csv" %} {# If we have a new CSV file, reset the offset #} {% if current_csv != ns.previous_csv %} {% set ns.offset = 11 %} {% endif %} {% set ns.previous_csv = current_csv %} {# Compute the row numbers for this leaf #} {% set start = ns.offset %} {% set end = ns.offset + 7 %} .. csv-to-list-table:: :file: {{ current_csv }} :rows: {{ start }}-{{ end }} {# Update the offset: block (8 rows) plus gap (19 rows) #} {% set ns.offset = ns.offset + 8 + 19 %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} .. AMD gfx generic targets ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The float types atomic operations that are supported by different AMD gfx generic targets listed in the following table. .. .. The relative path not working in datatemplate, that's why we also need the absolute path of docs folder. .. datatemplate:nodata:: {% set ns = namespace(offset=11, previous_csv='') %} .. tab-set:: {% for (atomics_type_text, atomics_type_key) in config.html_context['atomics_type'] %} .. tab-item:: {{ atomics_type_text }} :sync: {{ atomics_type_key }} .. tab-set:: {% for (pcie_type_text, pcie_type_key) in config.html_context['pcie_type'] %} .. tab-item:: {{ pcie_type_text }} :sync: {{ pcie_type_key }} .. tab-set:: {% for (memory_type_text, memory_type_key) in config.html_context['memory_type'] %} .. tab-item:: {{ memory_type_text }} :sync: {{ memory_type_key }} .. tab-set:: {% for (granularity_type_text, granularity_type_key) in config.html_context['granularity_type'] %} .. tab-item:: {{ granularity_type_text }} :sync: {{ granularity_type_key }} .. tab-set:: {% for (scope_type_text, scope_type_key) in config.html_context['scope_type'] %} .. 
tab-item:: {{ scope_type_text }} :sync: {{ scope_type_key }} {# Build the CSV file path for this branch #} {% set current_csv = "data/reference/gpu-atomics-operation/" ~ atomics_type_key ~ "_" ~ pcie_type_key ~ "_gfx.csv" %} {# If we switch CSV files, reset the offset to 2 (to skip the header row) #} {% if current_csv != ns.previous_csv %} {% set ns.offset = 11 %} {% endif %} {% set ns.previous_csv = current_csv %} {# Determine the increment based on atomics_type_key #} {% if atomics_type_key == "hw-atomics" %} {% set increment = 21 %} {% elif atomics_type_key == "cas-atomics" %} {% set increment = 19 %} {% endif %} {# Compute start and end rows (end is inclusive) #} {% set start = ns.offset %} {% set end = ns.offset + 7 %} .. csv-to-list-table:: :file: {{ current_csv }} :rows: {{ start }}-{{ end }} {# Update the offset for the next table in this CSV #} {% set ns.offset = ns.offset + 8 + increment %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} {% endfor %} .. --- .. meta:: :description: This page lists supported graph safe ROCm libraries. :keywords: AMD, ROCm, HIP, hipGRAPH ******************************************************************************** Graph-safe support for ROCm libraries ******************************************************************************** HIP graph-safe libraries operate safely in HIP execution graphs. :ref:`hip:how_to_HIP_graph` are an alternative way of executing tasks on a GPU that can provide performance benefits over launching kernels using the standard method via streams. Functions and routines from graph-safe libraries shouldn’t result in issues like race conditions, deadlocks, or unintended dependencies. The following table shows whether a ROCm library is graph-safe. .. list-table:: :header-rows: 1 * - ROCm library - Graph safe support * - `Composable Kernel `_ - ❌ * - `hipBLAS `_ - ✅ * - `hipBLASLt `_ - ⚠️ * - `hipCUB `_ - ✅ * - `hipFFT `_ - ✅ (see :ref:`details `) * - `hipRAND `_ - ✅ * - `hipSOLVER `_ - ⚠️ (experimental) * - `hipSPARSE `_ - ✅ * - `hipSPARSELt `_ - ⚠️ (experimental) * - `hipTensor `_ - ❌ * - `MIOpen `_ - ❌ * - `RCCL `_ - ✅ * - `rocAL `_ - ❌ * - `rocALUTION `_ - ❌ * - `rocBLAS `_ - ✅ (see :doc:`details `) * - `rocDecode `_ - ❌ * - `rocFFT `_ - ✅ (see :ref:`details `) * - `rocHPCG `_ - ❌ * - `rocJPEG `_ - ❌ * - `rocPRIM `_ - ✅ * - `rocRAND `_ - ✅ * - `rocSOLVER `_ - ⚠️ (experimental) * - `rocSPARSE `_ - ⚠️ (experimental) * - `rocThrust `_ - ❌ * - `rocWMMA `_ - ❌ * - `RPP `_ - ⚠️ * - `Tensile `_ - ✅ ✅: full support ⚠️: partial support ❌: not supported --- .. meta:: :description: Supported data types of AMD GPUs and libraries in ROCm. :keywords: precision, data types, HIP types, int8, float8, float8 (E4M3), float8 (E5M2), bfloat8, float16, half, bfloat16, tensorfloat32, float, float32, float64, double, AMD data types, HIP data types, ROCm precision, ROCm data types ************************************************************* Data types and precision support ************************************************************* This topic summarizes the data types supported on AMD GPUs and ROCm libraries, along with corresponding :doc:`HIP ` data types. Integral types ============== The signed and unsigned integral types supported by ROCm are listed in the following table. .. 
list-table:: :header-rows: 1 :widths: 15,35,50 * - Type name - HIP type - Description * - int8 - ``int8_t``, ``uint8_t`` - A signed or unsigned 8-bit integer * - int16 - ``int16_t``, ``uint16_t`` - A signed or unsigned 16-bit integer * - int32 - ``int32_t``, ``uint32_t`` - A signed or unsigned 32-bit integer * - int64 - ``int64_t``, ``uint64_t`` - A signed or unsigned 64-bit integer .. _precision_support_floating_point_types: Floating-point types ==================== The floating-point types supported by ROCm are listed in the following table. .. image:: ../data/about/compatibility/floating-point-data-types.png :alt: Supported floating-point types .. list-table:: :header-rows: 1 :widths: 15,25,60 * - Type name - HIP type - Description * - float4 (E2M1) - | ``__hip_fp4_e2m1`` - A 4-bit floating-point number with **E2M1** bit layout, as described in :doc:`low precision floating point types page `. * - float6 (E3M2) - | ``__hip_fp6_e3m2`` - A 6-bit floating-point number with **E3M2** bit layout, as described in :doc:`low precision floating point types page `. * - float6 (E2M3) - | ``__hip_fp6_e2m3`` - A 6-bit floating-point number with **E2M3** bit layout, as described in :doc:`low precision floating point types page `. * - float8 (E4M3) - | ``__hip_fp8_e4m3_fnuz``, | ``__hip_fp8_e4m3`` - An 8-bit floating-point number with **E4M3** bit layout, as described in :doc:`low precision floating point types page `. The FNUZ variant has expanded range with no infinity or signed zero (NaN represented as negative zero), while the OCP variant follows the Open Compute Project specification. * - float8 (E5M2) - | ``__hip_fp8_e5m2_fnuz``, | ``__hip_fp8_e5m2`` - An 8-bit floating-point number with **E5M2** bit layout, as described in :doc:`low precision floating point types page `. The FNUZ variant has expanded range with no infinity or signed zero (NaN represented as negative zero), while the OCP variant follows the Open Compute Project specification. * - float16 - ``half`` - A 16-bit floating-point number that conforms to the IEEE 754-2008 half-precision storage format. * - bfloat16 - ``bfloat16`` - A shortened 16-bit version of the IEEE 754 single-precision storage format. * - tensorfloat32 - Not available - A floating-point number that occupies 32 bits or less of storage, providing improved range compared to half (16-bit) format, at (potentially) greater throughput than single-precision (32-bit) formats. * - float32 - ``float`` - A 32-bit floating-point number that conforms to the IEEE 754 single-precision storage format. * - float64 - ``double`` - A 64-bit floating-point number that conforms to the IEEE 754 double-precision storage format. .. note:: * The float8 and tensorfloat32 types are internal types used in calculations in Matrix Cores and can be stored in any type of the same size. * CDNA3 natively supports FP8 FNUZ (E4M3 and E5M2), which differs from the customized FP8 format used with NVIDIA H100 (`FP8 Formats for Deep Learning `_). * In some AMD documents and articles, float8 (E5M2) is referred to as bfloat8. * The :doc:`low precision floating point types page ` describes how to use these types in HIP with examples. Level of support definitions ============================ In the following sections, icons represent the level of support. These icons, described in the following table, are also used in the library data type support pages. .. list-table:: :header-rows: 1 * - Icon - Definition * - NA - Not applicable * - ❌ - Not supported * - ⚠️ - Partial support * - ✅ - Full support .. 
note:: * Full support means that the type is supported natively or with hardware emulation. * Native support means that the operations for that type are implemented in hardware. Types that are not natively supported are emulated with the available hardware. The performance of non-natively supported types can differ from the full instruction throughput rate. For example, 16-bit integer operations can be performed on the 32-bit integer ALUs at full rate; however, 64-bit integer operations might need several instructions on the 32-bit integer ALUs. * Any type can be emulated by software, but this page does not cover such cases. Data type support by hardware architecture ========================================== AMD's GPU lineup spans multiple architecture generations: * CDNA1 such as MI100 * CDNA2 such as MI210, MI250, and MI250X * CDNA3 such as MI300A, MI300X, and MI325X * CDNA4 such as MI350X and MI355X * RDNA2 such as PRO W6800 and PRO V620 * RDNA3 such as RX 7900XT and RX 7900XTX * RDNA4 such as RX 9070 and RX 9070XT HIP C++ type implementation support ----------------------------------- The HIP C++ types available on different hardware platforms are listed in the following table. .. list-table:: :header-rows: 1 * - HIP C++ Type - CDNA1 - CDNA2 - CDNA3 - CDNA4 - RDNA2 - RDNA3 - RDNA4 * - ``int8_t``, ``uint8_t`` - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ * - ``int16_t``, ``uint16_t`` - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ * - ``int32_t``, ``uint32_t`` - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ * - ``int64_t``, ``uint64_t`` - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ * - ``__hip_fp4_e2m1`` - ❌ - ❌ - ❌ - ✅ - ❌ - ❌ - ❌ * - ``__hip_fp6_e2m3`` - ❌ - ❌ - ❌ - ✅ - ❌ - ❌ - ❌ * - ``__hip_fp6_e3m2`` - ❌ - ❌ - ❌ - ✅ - ❌ - ❌ - ❌ * - ``__hip_fp8_e4m3_fnuz`` - ❌ - ❌ - ✅ - ❌ - ❌ - ❌ - ❌ * - ``__hip_fp8_e5m2_fnuz`` - ❌ - ❌ - ✅ - ❌ - ❌ - ❌ - ❌ * - ``__hip_fp8_e4m3`` - ❌ - ❌ - ❌ - ✅ - ❌ - ❌ - ✅ * - ``__hip_fp8_e5m2`` - ❌ - ❌ - ❌ - ✅ - ❌ - ❌ - ✅ * - ``half`` - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ * - ``bfloat16`` - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ * - ``float`` - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ * - ``double`` - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ - ✅ .. note:: Library support for specific data types is contingent upon hardware support. Even if a ROCm library indicates support for a particular data type, that type will only be fully functional if the underlying hardware architecture (as shown in the table above) also supports it. For example, fp8 types are only available on architectures shown with a checkmark in the relevant rows. Compute units support --------------------- The following table lists data type support for compute units. .. tab-set:: .. tab-item:: Integral types :sync: integral-type .. list-table:: :header-rows: 1 * - Type name - int8 - int16 - int32 - int64 * - CDNA1 - ✅ - ✅ - ✅ - ✅ * - CDNA2 - ✅ - ✅ - ✅ - ✅ * - CDNA3 - ✅ - ✅ - ✅ - ✅ * - CDNA4 - ✅ - ✅ - ✅ - ✅ * - RDNA2 - ✅ - ✅ - ✅ - ✅ * - RDNA3 - ✅ - ✅ - ✅ - ✅ * - RDNA4 - ✅ - ✅ - ✅ - ✅ .. tab-item:: Low precision floating-point types :sync: floating-point-type-low .. list-table:: :header-rows: 1 * - Type name - float4 - float6 (E2M3) - float6 (E3M2) - float8 (E4M3) - float8 (E5M2) * - CDNA1 - ❌ - ❌ - ❌ - ❌ - ❌ * - CDNA2 - ❌ - ❌ - ❌ - ❌ - ❌ * - CDNA3 - ❌ - ❌ - ❌ - ❌ - ❌ * - CDNA4 - ❌ - ❌ - ❌ - ❌ - ❌ * - RDNA2 - ❌ - ❌ - ❌ - ❌ - ❌ * - RDNA3 - ❌ - ❌ - ❌ - ❌ - ❌ * - RDNA4 - ❌ - ❌ - ❌ - ❌ - ❌ .. tab-item:: High precision floating-point types :sync: floating-point-type-high .. 
list-table:: :header-rows: 1 * - Type name - float16 - bfloat16 - tensorfloat32 - float32 - float64 * - CDNA1 - ✅ - ✅ - ❌ - ✅ - ✅ * - CDNA2 - ✅ - ✅ - ❌ - ✅ - ✅ * - CDNA3 - ✅ - ✅ - ❌ - ✅ - ✅ * - CDNA4 - ✅ - ✅ - ❌ - ✅ - ✅ * - RDNA2 - ✅ - ✅ - ❌ - ✅ - ✅ * - RDNA3 - ✅ - ✅ - ❌ - ✅ - ✅ * - RDNA4 - ✅ - ✅ - ❌ - ✅ - ✅ Matrix core support ------------------- The following table lists data type support for AMD GPU matrix cores. .. tab-set:: .. tab-item:: Integral types :sync: integral-type .. list-table:: :header-rows: 1 * - Type name - int8 - int16 - int32 - int64 * - CDNA1 - ✅ - ❌ - ❌ - ❌ * - CDNA2 - ✅ - ❌ - ❌ - ❌ * - CDNA3 - ✅ - ❌ - ❌ - ❌ * - CDNA4 - ✅ - ❌ - ❌ - ❌ * - RDNA2 - ✅ - ❌ - ❌ - ❌ * - RDNA3 - ✅ - ❌ - ❌ - ❌ * - RDNA4 - ✅ - ❌ - ❌ - ❌ .. tab-item:: Low precision floating-point types :sync: floating-point-type-low .. list-table:: :header-rows: 1 * - Type name - float4 - float6 (E2M3) - float6 (E3M2) - float8 (E4M3) - float8 (E5M2) * - CDNA1 - ❌ - ❌ - ❌ - ❌ - ❌ * - CDNA2 - ❌ - ❌ - ❌ - ❌ - ❌ * - CDNA3 - ❌ - ❌ - ❌ - ✅ - ✅ * - CDNA4 - ✅ - ✅ - ✅ - ✅ - ✅ * - RDNA2 - ❌ - ❌ - ❌ - ❌ - ❌ * - RDNA3 - ❌ - ❌ - ❌ - ❌ - ❌ * - RDNA4 - ❌ - ❌ - ❌ - ✅ - ✅ .. tab-item:: High precision floating-point types :sync: floating-point-type-high .. list-table:: :header-rows: 1 * - Type name - float16 - bfloat16 - tensorfloat32 - float32 - float64 * - CDNA1 - ✅ - ✅ - ❌ - ✅ - ❌ * - CDNA2 - ✅ - ✅ - ❌ - ✅ - ✅ * - CDNA3 - ✅ - ✅ - ✅ - ✅ - ✅ * - CDNA4 - ✅ - ✅ - ✅ - ✅ - ✅ * - RDNA2 - ✅ - ✅ - ❌ - ❌ - ❌ * - RDNA3 - ✅ - ✅ - ❌ - ❌ - ❌ * - RDNA4 - ✅ - ✅ - ❌ - ❌ - ❌ Atomic operations support ------------------------- The following table lists which data types are supported for atomic operations on AMD GPUs. The behavior of an atomic operation is affected by the memory location, memory granularity, and scope of the operation. For details on the supported atomic read-modify-write (atomicRMW) operations, see the :ref:`Hardware atomics operation support ` page. .. tab-set:: .. tab-item:: Integral types :sync: integral-type .. list-table:: :header-rows: 1 * - Type name - int8 - int16 - int32 - int64 * - CDNA1 - ❌ - ❌ - ✅ - ✅ * - CDNA2 - ❌ - ❌ - ✅ - ✅ * - CDNA3 - ❌ - ❌ - ✅ - ✅ * - RDNA3 - ❌ - ❌ - ✅ - ✅ * - RDNA4 - ❌ - ❌ - ✅ - ✅ .. tab-item:: Low precision floating-point types :sync: floating-point-type-low .. list-table:: :header-rows: 1 * - Type name - float4 - float6 (E2M3) - float6 (E3M2) - float8 (E4M3) - float8 (E5M2) * - CDNA1 - ❌ - ❌ - ❌ - ❌ - ❌ * - CDNA2 - ❌ - ❌ - ❌ - ❌ - ❌ * - CDNA3 - ❌ - ❌ - ❌ - ❌ - ❌ * - CDNA4 - ❌ - ❌ - ❌ - ❌ - ❌ * - RDNA2 - ❌ - ❌ - ❌ - ❌ - ❌ * - RDNA3 - ❌ - ❌ - ❌ - ❌ - ❌ * - RDNA4 - ❌ - ❌ - ❌ - ❌ - ❌ .. tab-item:: High precision floating-point types :sync: floating-point-type-high .. list-table:: :header-rows: 1 * - Type name - 2 x float16 - 2 x bfloat16 - tensorfloat32 - float32 - float64 * - CDNA1 - ✅ - ✅ - ❌ - ✅ - ❌ * - CDNA2 - ✅ - ✅ - ❌ - ✅ - ✅ * - CDNA3 - ✅ - ✅ - ❌ - ✅ - ✅ * - CDNA4 - ✅ - ✅ - ❌ - ✅ - ✅ * - RDNA2 - ❌ - ❌ - ❌ - ✅ - ❌ * - RDNA3 - ❌ - ❌ - ❌ - ✅ - ❌ * - RDNA4 - ✅ - ✅ - ❌ - ✅ - ❌ .. note:: You can emulate atomic operations using software for cases that are not natively supported. Software-emulated atomic operations have a high negative performance impact when they frequently access the same memory address. Data type support in ROCm libraries =================================== ROCm library support for int8, float8 (E4M3), float8 (E5M2), int16, float16, bfloat16, int32, tensorfloat32, float32, int64, and float64 is listed in the following tables.
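As a minimal, illustrative sketch of how the HIP types listed above appear in device code (assuming the ``__half`` type provided by the ``<hip/hip_fp16.h>`` header in a standard ROCm install), the following kernel converts between ``float`` and 16-bit half precision. Whether such a type runs natively or is emulated depends on the hardware support shown in the preceding tables.

.. code-block:: cpp

   // Illustrative sketch: using 16-bit half-precision values in a HIP kernel.
   #include <cstdio>
   #include <hip/hip_runtime.h>
   #include <hip/hip_fp16.h>  // provides __half and float <-> half conversions

   __global__ void scale_half(const __half* in, __half* out, float factor, int n) {
       int i = blockIdx.x * blockDim.x + threadIdx.x;
       if (i < n) {
           // Compute in float and store the result back in the float16 storage format.
           out[i] = __float2half(__half2float(in[i]) * factor);
       }
   }

   int main() {
       const int n = 256;
       __half *d_in = nullptr, *d_out = nullptr;
       hipMalloc(&d_in, n * sizeof(__half));
       hipMalloc(&d_out, n * sizeof(__half));
       hipMemset(d_in, 0, n * sizeof(__half));  // placeholder input data
       hipLaunchKernelGGL(scale_half, dim3(1), dim3(n), 0, 0, d_in, d_out, 2.0f, n);
       hipDeviceSynchronize();
       std::printf("kernel finished\n");
       hipFree(d_in);
       hipFree(d_out);
       return 0;
   }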
Libraries input/output type support ----------------------------------- The following tables list ROCm library support for specific input and output data types. Select a library from the below table to view the supported data types. .. datatemplate:yaml:: /data/reference/precision-support/precision-support.yaml {% set library_groups = data.library_groups %} .. raw:: html
Category
{% for group in library_groups %}
{{ group.group }}
{% endfor %}
Library
{% for group in library_groups %} {% for library in group.libraries %}
{{ library.name }}
{% endfor %} {% endfor %}
{% for group in library_groups %} {% for library in group.libraries %} .. container:: model-doc {{ library.tag }} For more information, please visit :doc:`{{ library.name }} <{{ library.doc_link }}>`. .. list-table:: :header-rows: 1 :widths: 70, 30 * - Data Type - Support {% for data_type in library.data_types %} * - {{ data_type.type }} - {{ data_type.support }} {% endfor %} {% endfor %} {% endfor %} .. note:: The meaning of partial support depends on the library. Please refer to the individual libraries' documentation for more information. .. note:: As random number generation libraries, rocRAND and hipRAND only specify output data types for the random values they generate, with no need for input data types. .. note:: hipBLASLt supports additional data types as internal compute types, which may differ from the supported input/output types shown in the tables above. While TensorFloat32 is not supported as an input or output type in this library, it is available as an internal compute type. For complete details on supported compute types, refer to the :doc:`hipBLASLt ` documentation. hipDataType enumeration ----------------------- The ``hipDataType`` enumeration defines data precision types and is primarily used when the data reference itself does not include type information, such as in ``void*`` pointers. This enumeration is mainly utilized in BLAS libraries. The HIP type equivalents of the ``hipDataType`` enumeration are listed in the following table with descriptions and values. .. list-table:: :header-rows: 1 :widths: 25,25,10,40 * - hipDataType - HIP type - Value - Description * - ``HIP_R_8I`` - ``int8_t`` - 3 - 8-bit real signed integer. * - ``HIP_R_8U`` - ``uint8_t`` - 8 - 8-bit real unsigned integer. * - ``HIP_R_16I`` - ``int16_t`` - 20 - 16-bit real signed integer. * - ``HIP_R_16U`` - ``uint16_t`` - 22 - 16-bit real unsigned integer. * - ``HIP_R_32I`` - ``int32_t`` - 10 - 32-bit real signed integer. * - ``HIP_R_32U`` - ``uint32_t`` - 12 - 32-bit real unsigned integer. * - ``HIP_R_32F`` - ``float`` - 0 - 32-bit real single precision floating-point. * - ``HIP_R_64F`` - ``double`` - 1 - 64-bit real double precision floating-point. * - ``HIP_R_16F`` - ``half`` - 2 - 16-bit real half precision floating-point. * - ``HIP_R_16BF`` - ``bfloat16`` - 14 - 16-bit real bfloat16 precision floating-point. * - ``HIP_R_8F_E4M3`` - ``__hip_fp8_e4m3`` - 28 - 8-bit real float8 precision floating-point (OCP version). * - ``HIP_R_8F_E5M2`` - ``__hip_fp8_e5m2`` - 29 - 8-bit real bfloat8 precision floating-point (OCP version). * - ``HIP_R_6F_E2M3`` - ``__hip_fp6_e2m3`` - 31 - 6-bit real float6 precision floating-point. * - ``HIP_R_6F_E3M2`` - ``__hip_fp6_e3m2`` - 32 - 6-bit real bfloat6 precision floating-point. * - ``HIP_R_4F_E2M1`` - ``__hip_fp4_e2m1`` - 33 - 4-bit real float4 precision floating-point. * - ``HIP_R_8F_E4M3_FNUZ`` - ``__hip_fp8_e4m3_fnuz`` - 1000 - 8-bit real float8 precision floating-point (FNUZ version). * - ``HIP_R_8F_E5M2_FNUZ`` - ``__hip_fp8_e5m2_fnuz`` - 1001 - 8-bit real bfloat8 precision floating-point (FNUZ version). The full list of the ``hipDataType`` enumeration listed in `library_types.h `_. --- .. meta:: :description: What is ROCm :keywords: ROCm components, ROCm projects, introduction, ROCm, AMD, runtimes, compilers, tools, libraries, API *********************************************************** What is ROCm? 
*********************************************************** ROCm is a software stack, composed primarily of open-source software, that provides the tools for programming AMD Graphics Processing Units (GPUs), from low-level kernels to high-level end-user applications. .. image:: data/rocm-software-stack-7_0_0.jpg :width: 800 :alt: AMD's ROCm software stack and enabling technologies. :align: center Specifically, ROCm provides the tools for :doc:`HIP (Heterogeneous-computing Interface for Portability) `, OpenCL and OpenMP. These include compilers, libraries for high-level functions, debuggers, profilers and runtimes. ROCm components =============================================== ROCm consists of the following components. For information on the license associated with each component, see :doc:`ROCm licensing <./about/license>`. Libraries ----------------------------------------------- Machine Learning & Computer Vision ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Component", "Description" ":doc:`Composable Kernel `", "Provides a programming model for writing performance critical kernels for machine learning workloads across multiple architectures" ":doc:`MIGraphX `", "Graph inference engine that accelerates machine learning model inference" ":doc:`MIOpen `", "An open source deep-learning library" ":doc:`MIVisionX `", "Set of comprehensive computer vision and machine learning libraries, utilities, and applications" ":doc:`ROCm Performance Primitives (RPP) `", "Comprehensive high-performance computer vision library for AMD processors with HIP/OpenCL/CPU back-ends" ":doc:`rocAL `", "An augmentation library designed to decode and process images and videos" ":doc:`rocDecode `", "High-performance SDK for access to video decoding features on AMD GPUs" ":doc:`rocJPEG `", "Library for decoding JPG images on AMD GPUs" ":doc:`rocPyDecode `", "Provides access to rocDecode APIs in both Python and C/C++ languages" .. note:: `rocCV `_ is an efficient GPU-accelerated library for image pre- and post-processing. rocCV is in an early access state. Using it on production workloads is not recommended. Communication ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Component", "Description" ":doc:`RCCL `", "Standalone library that provides multi-GPU and multi-node collective communication primitives" ":doc:`rocSHMEM `", "An intra-kernel networking library that provides GPU-centric networking through an OpenSHMEM-like interface" Math ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
csv-table:: :header: "Component", "Description" "`half `_", "C++ header-only library that provides an IEEE 754 conformant, 16-bit half-precision floating-point type, along with corresponding arithmetic operators, type conversions, and common mathematical functions" ":doc:`hipBLAS `", "BLAS-marshaling library that supports :doc:`rocBLAS ` and cuBLAS backends" ":doc:`hipBLASLt `", "Provides general matrix-matrix operations with a flexible API and extends functionalities beyond traditional BLAS library" ":doc:`hipFFT `", "Fast Fourier transforms (FFT)-marshalling library that supports rocFFT or cuFFT backends" ":doc:`hipfort `", "Fortran interface library for accessing GPU Kernels" ":doc:`hipRAND `", "Ports CUDA applications that use the cuRAND library into the HIP layer" ":doc:`hipSOLVER `", "An LAPACK-marshalling library that supports :doc:`rocSOLVER ` and cuSOLVER backends" ":doc:`hipSPARSE `", "SPARSE-marshalling library that supports :doc:`rocSPARSE ` and cuSPARSE backends" ":doc:`hipSPARSELt `", "SPARSE-marshalling library with multiple supported backends" ":doc:`rocALUTION `", "Sparse linear algebra library for exploring fine-grained parallelism on ROCm runtime and toolchains" ":doc:`rocBLAS `", "BLAS implementation (in the HIP programming language) on the ROCm runtime and toolchains" ":doc:`rocFFT `", "Software library for computing fast Fourier transforms (FFTs) written in HIP" ":doc:`rocRAND `", "Provides functions that generate pseudorandom and quasirandom numbers" ":doc:`rocSOLVER `", "An implementation of LAPACK routines on ROCm software, implemented in the HIP programming language and optimized for AMD's latest discrete GPUs" ":doc:`rocSPARSE `", "Exposes a common interface that provides BLAS for sparse computation implemented on ROCm runtime and toolchains (in the HIP programming language)" ":doc:`rocWMMA `", "C++ library for accelerating mixed-precision matrix multiply-accumulate (MMA) operations" ":doc:`Tensile `", "Creates benchmark-driven backend libraries for GEMMs, GEMM-like problems, and general N-dimensional tensor contractions" Primitives ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Component", "Description" ":doc:`hipCUB `", "Thin header-only wrapper library on top of :doc:`rocPRIM ` or CUB that allows project porting using the CUB library to the HIP layer" ":doc:`hipTensor `", "AMD's C++ library for accelerating tensor primitives based on the composable kernel library" ":doc:`rocPRIM `", "Header-only library for HIP parallel primitives" ":doc:`rocThrust `", "Parallel algorithm library" Tools ----------------------------------------------- System Management ^^^^^^^^^^^^^^^^^ .. csv-table:: :header: "Component", "Description" ":doc:`AMD SMI `", "System management interface to control AMD GPU settings, monitor performance, and retrieve device and process information" ":doc:`ROCm Data Center Tool `", "Simplifies administration and addresses key infrastructure challenges in AMD GPUs in cluster and data-center environments" ":doc:`rocminfo `", "Reports system information" ":doc:`ROCm SMI `", "C library for Linux that provides a user space interface for applications to monitor and control GPU applications" ":doc:`ROCm Validation Suite `", "Detects and troubleshoots common problems affecting AMD GPUs running in a high-performance computing environment" Performance ^^^^^^^^^^^ .. 
csv-table:: :header: "Component", "Description" ":doc:`ROCm Bandwidth Test `", "Captures the performance characteristics of buffer copying and kernel read/write operations" ":doc:`ROCm Compute Profiler `", "Kernel-level profiling for machine learning and high performance computing (HPC) workloads" ":doc:`ROCm Systems Profiler `", "Comprehensive profiling and tracing of applications running on the CPU or the CPU and GPU" ":doc:`ROCProfiler `", "Profiling tool for HIP applications" ":doc:`ROCprofiler-SDK `", "Toolkit for developing analysis tools for profiling and tracing GPU compute applications. This toolkit is in beta and subject to change" ":doc:`ROCTracer `", "Intercepts runtime API calls and traces asynchronous activity" .. note:: `ROCprof Compute Viewer `_ is a tool for visualizing and analyzing GPU thread trace data collected using :doc:`rocprofv3 `. Note that `ROCprof Compute Viewer `_ is in an early access state. Running production workloads is not recommended. Development ^^^^^^^^^^^ .. csv-table:: :header: "Component", "Description" ":doc:`HIPIFY `", "Translates CUDA source code into portable HIP C++" ":doc:`ROCm CMake `", "Collection of CMake modules for common build and development tasks" ":doc:`ROCdbgapi `", "ROCm debugger API library" ":doc:`ROCm Debugger (ROCgdb) `", "Source-level debugger for Linux, based on the GNU Debugger (GDB)" ":doc:`ROCr Debug Agent `", "Prints the state of all AMD GPU wavefronts that caused a queue error by sending a SIGQUIT signal to the process while the program is running" Compilers ----------------------------------------------- .. csv-table:: :header: "Component", "Description" ":doc:`HIPCC `", "Compiler driver utility that calls Clang or NVCC and passes the appropriate include and library options for the target compiler and HIP infrastructure" ":doc:`ROCm compilers `", "ROCm LLVM compiler infrastructure" "`FLANG `_", "An out-of-tree Fortran compiler targeting LLVM" Runtimes ----------------------------------------------- .. csv-table:: :header: "Component", "Description" ":doc:`AMD Compute Language Runtime (CLR) `", "Contains source code for AMD's compute language runtimes: HIP and OpenCL" ":doc:`HIP `", "AMD's GPU programming language extension and the GPU runtime" ":doc:`ROCR-Runtime `", "User-mode API interfaces and libraries necessary for host applications to launch compute kernels on available HSA ROCm kernel agents" --- # ROCm license ```{include} ../../LICENSE ``` :::{note} The preceding license applies to the [ROCm repository](https://github.com/ROCm/ROCm), which primarily contains documentation. For licenses related to other ROCm components, refer to the following section. ::: ## ROCm component licenses ROCm is released by Advanced Micro Devices, Inc. (AMD) and is licensed per component separately. The following table is a list of ROCm components with links to their respective license terms. These components may include third party components subject to additional licenses. Please review individual repositories for more information. 
| Component | License | |:---------------------|:-------------------------| | [AMD Compute Language Runtime (CLR)](https://github.com/ROCm/rocm-systems/tree/develop/projects/clr) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/clr/LICENSE.md) | | [AMD SMI](https://github.com/ROCm/amdsmi) | [MIT](https://github.com/ROCm/amdsmi/blob/amd-staging/LICENSE) | | [aomp](https://github.com/ROCm/aomp/) | [Apache 2.0](https://github.com/ROCm/aomp/blob/aomp-dev/LICENSE) | | [aomp-extras](https://github.com/ROCm/aomp-extras/) | [MIT](https://github.com/ROCm/aomp-extras/blob/aomp-dev/LICENSE) | | [AQLprofile](https://github.com/ROCm/rocm-systems/tree/develop/projects/aqlprofile/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/aqlprofile/LICENSE.md) | | [Code Object Manager (Comgr)](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/comgr) | [The University of Illinois/NCSA](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/comgr/LICENSE.txt) | | [Composable Kernel](https://github.com/ROCm/composable_kernel) | [MIT](https://github.com/ROCm/composable_kernel/blob/develop/LICENSE) | | [half](https://github.com/ROCm/half/) | [MIT](https://github.com/ROCm/half/blob/rocm/LICENSE.txt) | | [HIP](https://github.com/ROCm/rocm-systems/tree/develop/projects/hip/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/hip/LICENSE.md) | | [hipamd](https://github.com/ROCm/rocm-systems/tree/develop/projects/clr/hipamd/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/clr/hipamd/LICENSE.md) | | [hipBLAS](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipblas/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipblas/LICENSE.md) | | [hipBLASLt](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipblaslt/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipblaslt/LICENSE.md) | | [HIPCC](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/hipcc) | [MIT](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/hipcc/LICENSE.txt) | | [hipCUB](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipcub/) | [Custom](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipcub/LICENSE.txt) | | [hipFFT](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipfft/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipfft/LICENSE.md) | | [hipfort](https://github.com/ROCm/hipfort/) | [MIT](https://github.com/ROCm/hipfort/blob/develop/LICENSE) | | [HIPIFY](https://github.com/ROCm/HIPIFY/) | [MIT](https://github.com/ROCm/HIPIFY/blob/amd-staging/LICENSE.txt) | | [hipRAND](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hiprand/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hiprand/LICENSE.md) | | [hipSOLVER](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipsolver/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipsolver/LICENSE.md) | | [hipSPARSE](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipsparse/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipsparse/LICENSE.md) | | [hipSPARSELt](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipsparselt/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipsparselt/LICENSE.md) | | [hipTensor](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hiptensor/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hiptensor/LICENSE) 
| | [llvm-project](https://github.com/ROCm/llvm-project/) | [Apache](https://github.com/ROCm/llvm-project/blob/amd-staging/LICENSE.TXT) | | [llvm-project/flang](https://github.com/ROCm/llvm-project/tree/amd-staging/flang) | [Apache 2.0](https://github.com/ROCm/llvm-project/blob/amd-staging/flang/LICENSE.TXT) | | [MIGraphX](https://github.com/ROCm/AMDMIGraphX/) | [MIT](https://github.com/ROCm/AMDMIGraphX/blob/develop/LICENSE) | | [MIOpen](https://github.com/ROCm/rocm-libraries/tree/develop/projects/miopen/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/miopen/LICENSE.md) | | [MIVisionX](https://github.com/ROCm/MIVisionX/) | [MIT](https://github.com/ROCm/MIVisionX/blob/develop/LICENSE.txt) | | [rocAL](https://github.com/ROCm/rocAL) | [MIT](https://github.com/ROCm/rocAL/blob/develop/LICENSE.txt) | | [rocALUTION](https://github.com/ROCm/rocALUTION/) | [MIT](https://github.com/ROCm/rocALUTION/blob/develop/LICENSE.md) | | [rocBLAS](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocblas/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocblas/LICENSE.md) | | [ROCdbgapi](https://github.com/ROCm/ROCdbgapi/) | [MIT](https://github.com/ROCm/ROCdbgapi/blob/amd-staging/LICENSE.txt) | | [rocDecode](https://github.com/ROCm/rocDecode) | [MIT](https://github.com/ROCm/rocDecode/blob/develop/LICENSE) | | [rocFFT](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocfft/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocfft/LICENSE.md) | | [ROCgdb](https://github.com/ROCm/ROCgdb/) | [GNU General Public License v3.0](https://github.com/ROCm/ROCgdb/blob/amd-staging/COPYING3) | | [rocJPEG](https://github.com/ROCm/rocJPEG/) | [MIT](https://github.com/ROCm/rocJPEG/blob/develop/LICENSE) | | [ROCK-Kernel-Driver](https://github.com/ROCm/ROCK-Kernel-Driver/) | [GPL 2.0 WITH Linux-syscall-note](https://github.com/ROCm/ROCK-Kernel-Driver/blob/master/COPYING) | | [rocminfo](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocminfo/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocminfo/License.txt) | | [ROCm Bandwidth Test](https://github.com/ROCm/rocm_bandwidth_test/) | [MIT](https://github.com/ROCm/rocm_bandwidth_test/blob/master/LICENSE.txt) | | [ROCm CMake](https://github.com/ROCm/rocm-cmake/) | [MIT](https://github.com/ROCm/rocm-cmake/blob/develop/LICENSE) | | [ROCm Communication Collectives Library (RCCL)](https://github.com/ROCm/rccl/) | [Custom](https://github.com/ROCm/rccl/blob/develop/LICENSE.txt) | | [ROCm-Core](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocm-core/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocm-core/LICENSE.md) | | [ROCm Compute Profiler](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocprofiler-compute/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocprofiler-compute/LICENSE.md) | | [ROCm Data Center (RDC)](https://github.com/ROCm/rocm-systems/tree/develop/projects/rdc/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rdc/LICENSE.md) | | [ROCm-Device-Libs](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/device-libs) | [The University of Illinois/NCSA](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/device-libs/LICENSE.TXT) | | [ROCm-OpenCL-Runtime](https://github.com/ROCm/rocm-systems/tree/develop/projects/clr/opencl/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/clr/opencl/LICENSE.md) | | [ROCm 
Performance Primitives (RPP)](https://github.com/ROCm/rpp) | [MIT](https://github.com/ROCm/rpp/blob/develop/LICENSE) | | [ROCm SMI Lib](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocm-smi-lib/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocm-smi-lib/LICENSE.md) | | [ROCm Systems Profiler](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocprofiler-systems/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocprofiler-systems/LICENSE.md) | | [ROCm Validation Suite](https://github.com/ROCm/ROCmValidationSuite/) | [MIT](https://github.com/ROCm/ROCmValidationSuite/blob/master/LICENSE) | | [rocPRIM](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocprim/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocprim/LICENSE.md) | | [ROCProfiler](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocprofiler/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocprofiler/LICENSE.md) | | [ROCprofiler-SDK](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocprofiler-sdk/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocprofiler-sdk/LICENSE.md) | | [rocPyDecode](https://github.com/ROCm/rocPyDecode) | [MIT](https://github.com/ROCm/rocPyDecode/blob/develop/LICENSE.txt) | | [rocRAND](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocrand/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocrand/LICENSE.md) | | [ROCr Debug Agent](https://github.com/ROCm/rocr_debug_agent/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocr_debug_agent/blob/amd-staging/LICENSE.txt) | | [ROCR-Runtime](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocr-runtime/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocr-runtime/LICENSE.txt) | | [rocSHMEM](https://github.com/ROCm/rocSHMEM/) | [MIT](https://github.com/ROCm/rocSHMEM/blob/develop/LICENSE.md) | | [rocSOLVER](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocsolver/) | [BSD-2-Clause](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocsolver/LICENSE.md) | | [rocSPARSE](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocsparse/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocsparse/LICENSE.md) | | [rocThrust](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocthrust/) | [Apache 2.0](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocthrust/LICENSE) | | [ROCTracer](https://github.com/ROCm/rocm-systems/tree/develop/projects/roctracer/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/roctracer/LICENSE.md) | | [rocWMMA](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocwmma/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocwmma/LICENSE.md) | | [Tensile](https://github.com/ROCm/rocm-libraries/tree/develop/shared/tensile/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/shared/tensile/LICENSE.md) | | [TransferBench](https://github.com/ROCm/TransferBench) | [MIT](https://github.com/ROCm/TransferBench/blob/develop/LICENSE.md) | Open sourced ROCm components are released via public GitHub repositories, packages on [https://repo.radeon.com](https://repo.radeon.com) and other distribution channels. Proprietary products are only available on [https://repo.radeon.com](https://repo.radeon.com). 
Proprietary components are organized in a proprietary subdirectory in the package repositories to distinguish from open sourced packages. ```{note} The following additional terms and conditions apply to your use of ROCm technical documentation. ``` ©2023 - 2025 Advanced Micro Devices, Inc. All rights reserved. The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions, and typographical errors. The information contained herein is subject to change and may be rendered inaccurate for many reasons, including but not limited to product and roadmap changes, component and motherboard version changes, new model and/or product releases, product differences between differing manufacturers, software changes, BIOS flashes, firmware upgrades, or the like. Any computer system has risks of security vulnerabilities that cannot be completely prevented or mitigated. AMD assumes no obligation to update or otherwise correct or revise this information. However, AMD reserves the right to revise this information and to make changes from time to time to the content hereof without obligation of AMD to notify any person of such revisions or changes. THIS INFORMATION IS PROVIDED “AS IS.” AMD MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY INACCURACIES, ERRORS, OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION. AMD SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL AMD BE LIABLE TO ANY PERSON FOR ANY RELIANCE, DIRECT, INDIRECT, SPECIAL, OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF ANY INFORMATION CONTAINED HEREIN, EVEN IF AMD IS EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. AMD, the AMD Arrow logo, ROCm, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies. ### Package licensing :::{attention} ROCprof Trace Decoder and AOCC CPU optimizations are provided in binary form, subject to the license agreement enclosed on [GitHub](https://github.com/ROCm/rocprof-trace-decoder/blob/amd-mainline/LICENSE) for ROCprof Trace Decoder, and [Developer Central](https://www.amd.com/en/developer/aocc.html) for AOCC. By using, installing, copying or distributing ROCprof Trace Decoder or AOCC CPU Optimizations, you agree to the terms and conditions of this license agreement. If you do not agree to the terms of this agreement, do not install, copy or use ROCprof Trace Decoder or the AOCC CPU Optimizations. ::: For the rest of the ROCm packages, you can find the licensing information at the following location: `/opt/rocm/share/doc//` or in the locations specified in the preceding table. For example, you can fetch the licensing information of the `amd_comgr` component (Code Object Manager) from the `/opt/rocm/share/doc/amd_comgr/LICENSE.txt` file. --- # Deep learning: Inception V3 with PyTorch ## Deep learning training Deep-learning models are designed to capture the complexity of the problem and the underlying data. These models are "deep," comprising multiple component layers. Training is finding the best parameters for each model layer to achieve a well-defined objective. The training data consists of input features in supervised learning, similar to what the learned model is expected to see during the evaluation or inference phase. 
The target output is also included, which serves to teach the model. A loss metric is defined as part of training that evaluates the model's performance during the training process. Training also includes the choice of an optimization algorithm that reduces the loss by adjusting the model's parameters. Training is an iterative process where training data is fed in, usually split into different batches, with the entirety of the training data passed during one training epoch. Training is usually run for multiple epochs. ## Training phases Training occurs in multiple phases for every batch of training data. The following table describes the types of training phases. :::{table} Types of Training Phases :name: training-phases :widths: auto | Phase | Description | | ----------------- | --- | | Forward Pass | The input features are fed into the model, whose parameters may be randomly initialized at the start of training. Activations (outputs) of each layer are retained during this pass to help in the loss gradient computation during the backward pass. | | Loss Computation | The output is compared against the target outputs, and the loss is computed. | | Backward Pass | The loss is propagated backward, and the model's error gradients are computed and stored for each trainable parameter. | | Optimization Pass | The optimization algorithm updates the model parameters using the stored error gradients. | ::: Training is different from inference, particularly from the hardware perspective. The following table shows the contrast between training and inference. :::{table} Training vs. Inference :name: training-inference :widths: auto | Training | Inference | | ----------- | ----------- | | Training is measured in hours/days. | Inference is measured in minutes. | | Training is generally run offline in a data center or cloud setting. | Inference is generally run on edge devices. | | The memory requirements for training are higher than for inference due to storing intermediate data, such as activations and error gradients. | The memory requirements are lower for inference than for training. | | Data for training is available on the disk before the training process and is generally significant. The training performance is measured by how fast the data batches can be processed. | Inference data usually arrives stochastically and may be batched to improve performance. Inference performance is generally measured by the throughput at which a batch of data is processed and by the delay in responding to the input (latency). | ::: Different precision and quantization data types are typically chosen for training (FP32, BF16) and inference (FP16, INT8). The computation hardware is often specialized for particular data types, so performance improves when a faster data type can be selected for the corresponding task. ## Case studies The following sections contain case studies for the Inception V3 model. ### Inception V3 with PyTorch Convolutional Neural Networks (CNNs) are a form of artificial neural network commonly used for image processing. One of the core layers of such a network is the convolutional layer, which convolves the input with a weight tensor and passes the result to the next layer. Inception V3 is an architectural development over the ImageNet competition-winning entry, AlexNet, using deeper and broader networks while attempting to meet computational and memory budgets. The implementation uses PyTorch as a framework.
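The four training phases described above map directly onto a few lines of PyTorch. The following minimal sketch is not part of the Inception V3 case study below; the layer sizes, the random input batch, and the hyperparameters are illustrative assumptions chosen only to show how one training iteration exercises the forward pass, loss computation, backward pass, and optimization pass.

```py
import torch
import torch.nn as nn

# Toy model: one convolutional layer (convolves the input with a weight tensor)
# followed by a small classifier head. Sizes are illustrative, not Inception V3.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# A random batch stands in for one batch of real training data.
images = torch.randn(4, 3, 32, 32)
targets = torch.randint(0, 10, (4,))

outputs = model(images)             # forward pass: activations are computed and retained
loss = criterion(outputs, targets)  # loss computation: compare outputs against targets
optimizer.zero_grad()
loss.backward()                     # backward pass: gradients are computed and stored
optimizer.step()                    # optimization pass: parameters are updated
```

On a ROCm system, the same pattern applies after moving the model and tensors to the GPU with `.to("cuda")`, as the full case study below does.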
This case study utilizes [TorchVision](https://pytorch.org/vision/stable/index.html), a repository of popular datasets and model architectures, for obtaining the model. TorchVision also provides pre-trained weights as a starting point to develop new models or fine-tune the model for a new task. #### Evaluating a pre-trained model The Inception V3 model introduces a simple image classification task with the pre-trained model. This does not involve training but utilizes an already pre-trained model from TorchVision. This example is adapted from the PyTorch research hub page on [Inception V3](https://pytorch.org/vision/master/models/inception.html). Follow these steps: 1. Run the PyTorch ROCm-based Docker image or refer to the section {doc}`Installing PyTorch ` for setting up a PyTorch environment on ROCm. ```dockerfile docker run -it -v $HOME:/data --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/kfd --device=/dev/dri --group-add video --ipc=host --shm-size 8G rocm/pytorch:latest ``` 2. Run the Python shell and import packages and libraries for model creation. ```py import torch import torchvision ``` 3. Set the model in evaluation mode. Evaluation mode directs PyTorch not to store intermediate data, which would have been used in training. ```py model = torch.hub.load('pytorch/vision:v0.10.0', 'inception_v3', pretrained=True) model.eval() ``` 4. Download a sample image for inference. ```py import urllib url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") try: urllib.URLopener().retrieve(url, filename) except: urllib.request.urlretrieve(url, filename) ``` 5. Import torchvision and PILImage support libraries. ```py from PIL import Image from torchvision import transforms input_image = Image.open(filename) ``` 6. Apply preprocessing and normalization. ```py preprocess = transforms.Compose([ transforms.Resize(299), transforms.CenterCrop(299), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) ``` 7. Use input tensors and unsqueeze them later. ```py input_tensor = preprocess(input_image) input_batch = input_tensor.unsqueeze(0) if torch.cuda.is_available(): input_batch = input_batch.to('cuda') model.to('cuda') ``` 8. Find out probabilities. ```py with torch.no_grad(): output = model(input_batch) print(output[0]) probabilities = torch.nn.functional.softmax(output[0], dim=0) print(probabilities) ``` 9. To understand the probabilities, download and examine the ImageNet labels. ```py wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt ``` 10. Read the categories and show the top categories for the image. ```py with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) ``` #### Training Inception V3 The previous section focused on downloading and using the Inception V3 model for a simple image classification task. This section walks through training the model on a new dataset. Follow these steps: 1. Run the PyTorch ROCm Docker image or refer to the section {doc}`Installing PyTorch ` for setting up a PyTorch environment on ROCm. ```dockerfile docker pull rocm/pytorch:latest docker run -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/kfd --device=/dev/dri --group-add video --ipc=host --shm-size 8G rocm/pytorch:latest ``` 2. Download an ImageNet database. 
For this example, the `tiny-imagenet-200`, a smaller ImageNet variant with 200 image classes and a training dataset with 100,000 images, was downsized to 64x64 color images. ```bash wget http://cs231n.stanford.edu/tiny-imagenet-200.zip ``` 3. Process the database to set the validation directory to the format expected by PyTorch's `DataLoader`. 4. Run the following script: ```py import io import glob import os from shutil import move from os.path import join from os import listdir, rmdir target_folder = './tiny-imagenet-200/val/' val_dict = {} with open('./tiny-imagenet-200/val/val_annotations.txt', 'r') as f: for line in f.readlines(): split_line = line.split('\t') val_dict[split_line[0]] = split_line[1] paths = glob.glob('./tiny-imagenet-200/val/images/*') for path in paths: file = path.split('/')[-1] folder = val_dict[file] if not os.path.exists(target_folder + str(folder)): os.mkdir(target_folder + str(folder)) os.mkdir(target_folder + str(folder) + '/images') for path in paths: file = path.split('/')[-1] folder = val_dict[file] dest = target_folder + str(folder) + '/images/' + str(file) move(path, dest) rmdir('./tiny-imagenet-200/val/images') ``` 5. Open a Python shell. 6. Import dependencies, including Torch, OS, and [TorchVision](https://github.com/pytorch/vision). ```py import torch import os import torchvision from torchvision import transforms from torchvision.transforms.functional import InterpolationMode ``` 7. Set parameters to guide the training process. :::{note} The device is set to `"cuda"`. In PyTorch, `"cuda"` is a generic keyword to denote a GPU. ::: ```py device = "cuda" ``` 8. Set the data_path to the location of the training and validation data. In this case, the `tiny-imagenet-200` is present as a subdirectory to the current directory. ```py data_path = "tiny-imagenet-200" ``` The training image size is cropped for input into Inception V3. ```py train_crop_size = 299 ``` 9. To smooth the image, use bilinear interpolation, a resampling method that uses the distance weighted average of the four nearest pixel values to estimate a new pixel value. ```py interpolation = "bilinear" ``` The next parameters control the size to which the validation image is cropped and resized. ```py val_crop_size = 299 val_resize_size = 342 ``` The pre-trained Inception V3 model is chosen to be downloaded from torchvision. ```py model_name = "inception_v3" pretrained = True ``` During each training step, a batch of images is processed to compute the loss gradient and perform the optimization. In the following setting, the size of the batch is determined. ```py batch_size = 32 ``` This refers to the number of CPU threads the data loader uses to perform efficient multi-process data loading. ```py num_workers = 16 ``` The `torch.optim` package provides methods to adjust the learning rate as the training progresses. This example uses the `StepLR` scheduler, which decays the learning rate by `lr_gamma` at every `lr_step_size` number of epochs. ```py learning_rate = 0.1 momentum = 0.9 weight_decay = 1e-4 lr_step_size = 30 lr_gamma = 0.1 ``` :::{note} One training epoch is when the neural network passes an entire dataset forward and backward. ::: ```py epochs = 90 ``` The train and validation directories are determined. ```py train_dir = os.path.join(data_path, "train") val_dir = os.path.join(data_path, "val") ``` 10. Set up the training and testing data loaders. 
```py interpolation = InterpolationMode(interpolation) # Normalize and standardize the image TRAIN_TRANSFORM_IMG = transforms.Compose([ transforms.RandomResizedCrop(train_crop_size, interpolation=interpolation), transforms.PILToTensor(), transforms.ConvertImageDtype(torch.float), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) dataset = torchvision.datasets.ImageFolder( train_dir, transform=TRAIN_TRANSFORM_IMG ) TEST_TRANSFORM_IMG = transforms.Compose([ transforms.Resize(val_resize_size, interpolation=interpolation), transforms.CenterCrop(val_crop_size), transforms.PILToTensor(), transforms.ConvertImageDtype(torch.float), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) dataset_test = torchvision.datasets.ImageFolder( val_dir, transform=TEST_TRANSFORM_IMG ) print("Creating data loaders") train_sampler = torch.utils.data.RandomSampler(dataset) test_sampler = torch.utils.data.SequentialSampler(dataset_test) data_loader = torch.utils.data.DataLoader( dataset, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers, pin_memory=True ) data_loader_test = torch.utils.data.DataLoader( dataset_test, batch_size=batch_size, sampler=test_sampler, num_workers=num_workers, pin_memory=True ) ``` :::{note} Use torchvision to obtain the Inception V3 model. Use the pre-trained model weights to speed up training. ::: ```py print("Creating model") print("Num classes = ", len(dataset.classes)) model = torchvision.models.__dict__[model_name](pretrained=pretrained) ``` 11. Adapt Inception V3 for the current dataset. `tiny-imagenet-200` contains only 200 classes, whereas Inception V3 is designed for 1,000-class output. The last layer of Inception V3 is replaced to match the output features required. ```py model.fc = torch.nn.Linear(model.fc.in_features, len(dataset.classes)) model.aux_logits = False model.AuxLogits = None ``` 12. Move the model to the GPU device. ```py model.to(device) ``` 13. Set the loss criteria. For this example, Cross Entropy Loss is used. ```py criterion = torch.nn.CrossEntropyLoss() ``` 14. Set the optimizer to Stochastic Gradient Descent. ```py optimizer = torch.optim.SGD( model.parameters(), lr=learning_rate, momentum=momentum, weight_decay=weight_decay ) ``` 15. Set the learning rate scheduler. ```py lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=lr_step_size, gamma=lr_gamma) ``` 16. Iterate over epochs. Each epoch is a complete pass through the training data. ```py print("Start training") for epoch in range(epochs): model.train() epoch_loss = 0 len_dataset = 0 ``` 17. Iterate over steps. The data is processed in batches, and each step passes through a full batch. ```py for step, (image, target) in enumerate(data_loader): ``` 18. Pass the image and target to the GPU device. ```py image, target = image.to(device), target.to(device) ``` The following is the core training logic: a. The image is fed into the model. b. The output is compared with the target in the training data to obtain the loss. c. This loss is backpropagated to all parameters that require optimization. d. The optimizer updates the parameters based on the selected optimization algorithm. ```py output = model(image) loss = criterion(output, target) optimizer.zero_grad() loss.backward() optimizer.step() ``` The epoch loss is updated, and the step loss is printed.
```py epoch_loss += output.shape[0] * loss.item() len_dataset += output.shape[0] if step % 10 == 0: print('Epoch: ', epoch, '| step : %d' % step, '| train loss : %0.4f' % loss.item() ) epoch_loss = epoch_loss / len_dataset print('Epoch: ', epoch, '| train loss : %0.4f' % epoch_loss ) ``` The learning rate is updated at the end of each epoch. ```py lr_scheduler.step() ``` After training for the epoch, the model is evaluated against the validation dataset. ```py model.eval() with torch.inference_mode(): running_loss = 0 for step, (image, target) in enumerate(data_loader_test): image, target = image.to(device), target.to(device) output = model(image) loss = criterion(output, target) running_loss += loss.item() running_loss = running_loss / len(data_loader_test) print('Epoch: ', epoch, '| test loss : %0.4f' % running_loss ) ``` 19. Save the model for use in inference tasks. ```py # save model torch.save(model.state_dict(), "trained_inception_v3.pt") ``` Plotting the train and test loss shows both metrics decreasing over training epochs. This is demonstrated in the following image. ![Inception V3 train and loss graph](../data/conceptual/inception-v3.png "Inception V3 train and loss") ### Custom model with CIFAR-10 on PyTorch The Canadian Institute for Advanced Research (CIFAR)-10 dataset is a subset of the Tiny Images dataset (which contains 80 million 32x32 images collected from the Internet) and consists of 60,000 32x32 color images. The images are labeled with one of 10 mutually exclusive classes: airplane, motor car, bird, cat, deer, dog, frog, cruise ship, stallion, and truck (but not pickup truck). There are 6,000 images per class, with 5,000 training and 1,000 testing images per class. This example prepares a custom model for classifying these images using the PyTorch framework. Follow these steps: 1. Import dependencies, including Torch, OS, and [TorchVision](https://github.com/pytorch/vision). ```py import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plot import numpy as np ``` 2. The output of torchvision datasets is `PILImage` images of range [0, 1]. Transform them to Tensors of normalized range [-1, 1]. ```py transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) ``` During each training step, a batch of images is processed to compute the loss gradient and perform the optimization. In the following setting, the size of the batch is determined. ```py batch_size = 4 ``` 3. Download the training dataset as follows. Specify the batch size, shuffle the dataset once, and set the number of workers to the number of CPU threads used by the data loader to perform efficient multi-process data loading. ```py train_set = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=2) ``` 4. Follow the same procedure for the testing set. ```py test_set = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size, shuffle=False, num_workers=2) print("test set and test loader") ``` 5. Specify the defined classes of images belonging to this dataset. ```py classes = ('Aeroplane', 'motorcar', 'bird', 'cat', 'deer', 'puppy', 'frog', 'stallion', 'cruise', 'truck') print("defined classes") ``` 6.
Denormalize the images and then iterate over them. ```py global image_number image_number = 0 def show_image(img): global image_number image_number = image_number + 1 img = img / 2 + 0.5 # de-normalizing input image npimg = img.numpy() plot.imshow(np.transpose(npimg, (1, 2, 0))) plot.savefig("fig{}.jpg".format(image_number)) print("fig{}.jpg".format(image_number)) plot.show() data_iter = iter(train_loader) images, labels = next(data_iter) show_image(torchvision.utils.make_grid(images)) print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size))) print("image created and saved ") ``` 7. Import `torch.nn` for constructing neural networks and `torch.nn.functional` for the convolution functions. ```py import torch.nn as nn import torch.nn.functional as F ``` 8. Define the CNN (Convolutional Neural Network) and relevant activation functions. ```py class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) # flatten all dimensions except batch x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() print("created Net() ") ``` 9. Import `torch.optim` to access the Stochastic Gradient Descent optimizer. ```py import torch.optim as optim ``` 10. Set the loss criteria and the optimizer. For this example, Cross Entropy Loss and Stochastic Gradient Descent are used. ```py criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ``` 11. Iterate over epochs. Each epoch is a complete pass through the training data. ```py for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(train_loader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') ``` ```py PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) print("saved model to path :",PATH) net = Net() net.load_state_dict(torch.load(PATH)) print("loading back saved model") outputs = net(images) _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) correct = 0 total = 0 ``` As this is not training, calculating the gradients for outputs is not required.
```py # since this is not training, gradients are not needed for the outputs with torch.no_grad(): for data in test_loader: images, labels = data # calculate outputs by running images through the network outputs = net(images) # the class with the highest energy is what you can choose as prediction _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) # prepare to count predictions for each class correct_pred = {classname: 0 for classname in classes} total_pred = {classname: 0 for classname in classes} ``` ```py # again no gradients needed with torch.no_grad(): for data in test_loader: images, labels = data outputs = net(images) _, predictions = torch.max(outputs, 1) # collect the correct predictions for each class for label, prediction in zip(labels, predictions): if label == prediction: correct_pred[classes[label]] += 1 total_pred[classes[label]] += 1 # print accuracy for each class for classname, correct_count in correct_pred.items(): accuracy = 100 * float(correct_count) / total_pred[classname] print("Accuracy for class {:5s} is: {:.1f} %".format(classname,accuracy)) ``` ### Case study: TensorFlow with Fashion-MNIST Fashion-MNIST is a dataset that contains 70,000 grayscale images in 10 categories. Implement and train a neural network model using the TensorFlow framework to classify images of clothing, like sneakers and shirts. The dataset has 60,000 images you will use to train the network and 10,000 to evaluate how accurately the network learned to classify images. The Fashion-MNIST dataset can be accessed via TensorFlow internal libraries. Access the source code from the following repository: [https://github.com/ROCm/tensorflow_fashionmnist/blob/main/fashion_mnist.py](https://github.com/ROCm/tensorflow_fashionmnist/blob/main/fashion_mnist.py) To understand the code step by step, follow these steps: 1. Import libraries like TensorFlow, NumPy, and Matplotlib to train the neural network and calculate and plot graphs. ```py import tensorflow as tf import numpy as np import matplotlib.pyplot as plt ``` 2. To verify that TensorFlow is installed, print the version of TensorFlow by using the following print statement: ```py print(tf.__version__) ``` 3. Load the dataset from the available internal libraries to analyze and train a neural network on the Fashion-MNIST dataset. Loading the dataset returns four NumPy arrays. The model uses the training set arrays, train_images and train_labels, to learn. 4. The model is tested against the test set arrays, test_images and test_labels. ```py fashion_mnist = tf.keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() ``` Since you have 10 types of images in the dataset, assign labels from zero to nine. Each image is assigned one label. The images are 28x28 NumPy arrays, with pixel values ranging from zero to 255. 5. Each image is mapped to a single label. Since the class names are not included with the dataset, store them, and later use them when plotting the images: ```py class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] ``` 6. Use this code to explore the dimensions of the dataset: ```py train_images.shape ``` 7. Use this code to print the size of this training set: ```py print(len(train_labels)) ``` 8.
Use this code to print the labels of this training set: ```py print(train_labels) ``` 9. Preprocess the data before training the network, and you can start inspecting the first image, as its pixels will fall in the range of zero to 255. ```py plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() ``` ![ ](../data/conceptual/mnist-1.png) 10. From the above picture, you can see that values are from zero to 255. Before training this on the neural network, you must bring them in the range of zero to one. Hence, divide the values by 255. ```py train_images = train_images / 255.0 test_images = test_images / 255.0 ``` 11. To ensure the data is in the correct format and ready to build and train the network, display the first 25 images from the training set and the class name below each image. ```py plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() ``` ![ ](../data/conceptual/mnist-2.png) The basic building block of a neural network is the layer. Layers extract representations from the data fed into them. Deep learning consists of chaining together simple layers. Most layers, such as `tf.keras.layers.Dense`, have parameters that are learned during training. ```py model = tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(10) ]) ``` * The first layer in this network `tf.keras.layers.Flatten` transforms the format of the images from a two-dimensional array (of 28 x 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data. * After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely connected or fully connected neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer returns a logits array with a length of 10. Each node contains a score that indicates the current image belongs to one of the 10 classes. 12. You must add the Loss function, Metrics, and Optimizer at the time of model compilation. ```py model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) ``` * Loss function —This measures how accurate the model is during training when you are looking to minimize this function to "steer" the model in the right direction. * Optimizer —This is how the model is updated based on the data it sees and its loss function. * Metrics —This is used to monitor the training and testing steps. The following example uses accuracy, the fraction of the correctly classified images. To train the neural network model, follow these steps: 1. Feed the training data to the model. The training data is in the train_images and train_labels arrays in this example. The model learns to associate images and labels. 2. Ask the model to make predictions about a test set—in this example, the test_images array. 3. Verify that the predictions match the labels from the test_labels array. 4. To start training, call the model.fit method because it "fits" the model to the training data. ```py model.fit(train_images, train_labels, epochs=10) ``` 5. Compare how the model will perform on the test dataset. 
```py test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print('\nTest accuracy:', test_acc) ``` 6. With the model trained, you can use it to make predictions about some images: the model's linear outputs and logits. Attach a softmax layer to convert the logits to probabilities, making it easier to interpret. ```py probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()]) predictions = probability_model.predict(test_images) ``` 7. The model has predicted the label for each image in the testing set. Look at the first prediction: ```py predictions[0] ``` A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. You can see which label has the highest confidence value: ```py np.argmax(predictions[0]) ``` 8. Plot a graph to look at the complete set of 10 class predictions. ```py def plot_image(i, predictions_array, true_label, img): true_label, img = true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): true_label = true_label[i] plt.grid(False) plt.xticks(range(10)) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') ``` 9. With the model trained, you can use it to make predictions about some images. Review the 0th image predictions and the prediction array. Correct prediction labels are blue, and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label. ```py i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions[i], test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions[i], test_labels) plt.show() ``` ![ ](../data/conceptual/mnist-3.png) ```py i = 12 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions[i], test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions[i], test_labels) plt.show() ``` ![ ](../data/conceptual/mnist-4.png) 10. Use the trained model to predict a single image. ```py # Grab an image from the test dataset. img = test_images[1] print(img.shape) ``` 11. `tf.keras` models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though you are using a single image, you must add it to a list. ```py # Add the image to a batch where it's the only member. img = (np.expand_dims(img,0)) print(img.shape) ``` 12. Predict the correct label for this image. ```py predictions_single = probability_model.predict(img) print(predictions_single) plot_value_array(1, predictions_single[0], test_labels) _ = plt.xticks(range(10), class_names, rotation=45) plt.show() ``` ![ ](../data/conceptual/mnist-5.png) 13. `tf.keras.Model.predict` returns a list of lists—one for each image in the batch of data. Grab the predictions for our (only) image in the batch. ```py np.argmax(predictions_single[0]) ``` ### Case study: TensorFlow with text classification This procedure demonstrates text classification starting from plain text files stored on disk. 
You will train a binary classifier to perform sentiment analysis on an IMDB dataset. At the end of the notebook, there is an exercise for you to try in which you will train a multi-class classifier to predict the tag for a programming question on Stack Overflow. Follow these steps: 1. Import the necessary libraries. ```py import matplotlib.pyplot as plt import os import re import shutil import string import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras import losses ``` 2. Get the data for the text classification, and extract the database from the given link of IMDB. ```py url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz" dataset = tf.keras.utils.get_file("aclImdb_v1", url, untar=True, cache_dir='.', cache_subdir='') ``` ```bash Downloading data from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz 84131840/84125825 [==============================] – 1s 0us/step 84149932/84125825 [==============================] – 1s 0us/step ``` 3. Fetch the data from the directory. ```py dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb') print(os.listdir(dataset_dir)) ``` 4. Load the data for training purposes. ```py train_dir = os.path.join(dataset_dir, 'train') os.listdir(train_dir) ``` ```py ['labeledBow.feat', 'urls_pos.txt', 'urls_unsup.txt', 'unsup', 'pos', 'unsupBow.feat', 'urls_neg.txt', 'neg'] ``` 5. The directories contain many text files, each of which is a single movie review. To look at one of them, use the following: ```py sample_file = os.path.join(train_dir, 'pos/1181_9.txt') with open(sample_file) as f: print(f.read()) ``` 6. As the IMDB dataset contains additional folders, remove them before using this utility. ```py remove_dir = os.path.join(train_dir, 'unsup') shutil.rmtree(remove_dir) batch_size = 32 seed = 42 ``` 7. The IMDB dataset has already been divided into train and test but lacks a validation set. Create a validation set using an 80:20 split of the training data by using the validation_split argument below: ```py raw_train_ds=tf.keras.utils.text_dataset_from_directory('aclImdb/train',batch_size=batch_size, validation_split=0.2,subset='training', seed=seed) ``` 8. As you will see in a moment, you can train a model by passing a dataset directly to `model.fit`. If you are new to `tf.data`, you can also iterate over the dataset and print a few examples as follows: ```py for text_batch, label_batch in raw_train_ds.take(1): for i in range(3): print("Review", text_batch.numpy()[i]) print("Label", label_batch.numpy()[i]) ``` 9. The labels are zero or one. To see which of these correspond to positive and negative movie reviews, check the class_names property on the dataset. ```py print("Label 0 corresponds to", raw_train_ds.class_names[0]) print("Label 1 corresponds to", raw_train_ds.class_names[1]) ``` 10. Next, create validation and test the dataset. Use the remaining 5,000 reviews from the training set for validation into two classes of 2,500 reviews each. ```py raw_val_ds = tf.keras.utils.text_dataset_from_directory('aclImdb/train', batch_size=batch_size,validation_split=0.2,subset='validation', seed=seed) raw_test_ds = tf.keras.utils.text_dataset_from_directory( 'aclImdb/test', batch_size=batch_size) ``` To prepare the data for training, follow these steps: 1. Standardize, tokenize, and vectorize the data using the helpful `tf.keras.layers.TextVectorization` layer. ```py def custom_standardization(input_data): lowercase = tf.strings.lower(input_data) stripped_html = tf.strings.regex_replace(lowercase, '
<br />', ' ') return tf.strings.regex_replace(stripped_html, '[%s]' % re.escape(string.punctuation),'') ``` 2. Create a `TextVectorization` layer. Use this layer to standardize, tokenize, and vectorize the data. Set the `output_mode` to `int` to create unique integer indices for each token. Note that the layer uses the default split function and the custom standardization function defined above. You will also define some constants for the model, like an explicit maximum `sequence_length`, which causes the layer to pad or truncate sequences to exactly `sequence_length` values. ```py max_features = 10000 sequence_length = 250 vectorize_layer = layers.TextVectorization( standardize=custom_standardization, max_tokens=max_features, output_mode='int', output_sequence_length=sequence_length) ``` 3. Call `adapt` to fit the state of the preprocessing layer to the dataset. This causes the model to build an index of strings to integers. ```py # Make a text-only dataset (without labels), then call adapt train_text = raw_train_ds.map(lambda x, y: x) vectorize_layer.adapt(train_text) ``` 4. Create a function to see the result of using this layer to preprocess some data. ```py def vectorize_text(text, label): text = tf.expand_dims(text, -1) return vectorize_layer(text), label text_batch, label_batch = next(iter(raw_train_ds)) first_review, first_label = text_batch[0], label_batch[0] print("Review", first_review) print("Label", raw_train_ds.class_names[first_label]) print("Vectorized review", vectorize_text(first_review, first_label)) ``` ![ ](../data/conceptual/TextClassification-3.png) 5. As you can see above, each token has been replaced by an integer. Look up the token (string) that each integer corresponds to by calling `get_vocabulary()` on the layer. ```py print("1287 ---> ",vectorize_layer.get_vocabulary()[1287]) print(" 313 ---> ",vectorize_layer.get_vocabulary()[313]) print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary()))) ``` 6. You are nearly ready to train your model. As a final preprocessing step, apply the `TextVectorization` layer you created earlier to the train, validation, and test datasets. ```py train_ds = raw_train_ds.map(vectorize_text) val_ds = raw_val_ds.map(vectorize_text) test_ds = raw_test_ds.map(vectorize_text) ``` The `cache()` function keeps data in memory after it is loaded off disk. This ensures the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files. The `prefetch()` function overlaps data preprocessing and model execution while training. ```py AUTOTUNE = tf.data.AUTOTUNE train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE) ``` 7. Create your neural network. ```py embedding_dim = 16 model = tf.keras.Sequential([layers.Embedding(max_features + 1, embedding_dim),layers.Dropout(0.2),layers.GlobalAveragePooling1D(), layers.Dropout(0.2),layers.Dense(1)]) model.summary() ``` ![ ](../data/conceptual/TextClassification-4.png) 8. A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), use the [`losses.BinaryCrossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) loss function.
```py model.compile(loss=losses.BinaryCrossentropy(from_logits=True), optimizer='adam',metrics=tf.metrics.BinaryAccuracy(threshold=0.0)) ``` 9. Train the model by passing the dataset object to the fit method. ```py epochs = 10 history = model.fit(train_ds,validation_data=val_ds,epochs=epochs) ``` ![ ](../data/conceptual/TextClassification-5.png) 10. See how the model performs. Two values are returned: loss (a number representing our error; lower values are better) and accuracy. ```py loss, accuracy = model.evaluate(test_ds) print("Loss: ", loss) print("Accuracy: ", accuracy) ``` :::{note} `model.fit()` returns a History object that contains a dictionary with everything that happened during training. ::: ```py history_dict = history.history history_dict.keys() ``` 11. Four entries are for each monitored metric during training and validation. Use these to plot the training and validation loss for comparison, as well as the training and validation accuracy: ```py acc = history_dict['binary_accuracy'] val_acc = history_dict['val_binary_accuracy'] loss = history_dict['loss'] val_loss = history_dict['val_loss'] epochs = range(1, len(acc) + 1) # "bo" is for "blue dot" plt.plot(epochs, loss, 'bo', label='Training loss') # b is for "solid blue line" plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() ``` The following images illustrate the training and validation loss and the training and validation accuracy. ![Training and validation loss](../data/conceptual/TextClassification-6.png "Training and validation loss") ![Training and validation accuracy](../data/conceptual/TextClassification-7.png "Training and validation accuracy") 12. Export the model. ```py export_model = tf.keras.Sequential([ vectorize_layer, model, layers.Activation('sigmoid') ]) export_model.compile( loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy'] ) # Test it with `raw_test_ds`, which yields raw strings loss, accuracy = export_model.evaluate(raw_test_ds) print(accuracy) ``` 13. To get predictions for new examples, call model.predict(). ```py examples = [ "The movie was great!", "The movie was okay.", "The movie was terrible..." ] export_model.predict(examples) ``` --- # Using compiler features The following topics describe using specific features of the compilation tools: * [ROCm compiler infrastructure](https://rocm.docs.amd.com/projects/llvm-project/en/latest/index.html) * [Using AddressSanitizer](https://rocm.docs.amd.com/projects/llvm-project/en/latest/conceptual/using-gpu-sanitizer.html) * [OpenMP support](https://rocm.docs.amd.com/projects/llvm-project/en/latest/conceptual/openmp.html) --- # ROCm Linux Filesystem Hierarchy Standard reorganization ## Introduction The ROCm Software has adopted the Linux Filesystem Hierarchy Standard (FHS) [https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html) in order to to ensure ROCm is consistent with standard open source conventions. The following sections specify how current and future releases of ROCm adhere to FHS, how the previous ROCm file system is supported, and how improved versioning specifications are applied to ROCm. 
## Adopting the FHS In order to standardize ROCm directory structure and directory content layout ROCm has adopted the [FHS](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html), adhering to open source conventions for Linux-based distribution. FHS ensures internal consistency within the ROCm stack, as well as external consistency with other systems and distributions. The ROCm proposed file structure is outlined below: ```none /opt/rocm- | -- bin | -- all public binaries | -- lib | -- lib.so->lib.so.major->lib.so.major.minor.patch (public libaries to link with applications) | -- | -- architecture dependent libraries and binaries used internally by components | -- cmake | -- | ---config.cmake | -- libexec | -- | -- non ISA/architecture independent executables used internally by components | -- include | -- | -- public header files | -- share | -- html | -- | -- html documentation | -- info | -- | -- info files | -- man | -- | -- man pages | -- doc | -- | -- license files | -- | -- samples | -- architecture independent misc files ``` ## Changes from earlier ROCm versions The following table provides a brief overview of the new ROCm FHS layout, compared to the layout of earlier ROCm versions. Note that /opt/ is used to denote the default rocm-installation-path and should be replaced in case of a non-standard installation location of the ROCm distribution. ```none ______________________________________________________ | New ROCm Layout | Previous ROCm Layout | |_____________________________|________________________| | /opt/rocm- | /opt/rocm- | | | -- bin | | -- bin | | | -- lib | | -- lib | | | -- cmake | | -- include | | | -- libexec | | -- | | | -- include | | -- bin | | | -- | | -- cmake | | | -- share | | -- doc | | | -- html | | -- lib | | | -- info | | -- include | | | -- man | | -- samples | | | -- doc | | -- | | | -- | | -- bin | | | -- samples | | -- cmake | | | -- .. | | -- doc | | | -- | | -- lib | | | -- samples | | -- include | | | -- .. | | -- samples | |______________________________________________________| ``` ## ROCm FHS reorganization: backward compatibility The FHS file organization for ROCm was first introduced in the release of ROCm 5.2 . Backward compatibility was implemented to make sure users could still run their ROCm applications while transitioning to the new FHS. ROCm has moved header files and libraries to their new locations as indicated in the above structure, and included symbolic-links and wrapper header files in their old location for backward compatibility. The following sections detail ROCm backward compatibility implementation for wrapper header files, executable files, library files and CMake config files. ### Wrapper header files Wrapper header files are placed in the old location ( `/opt/rocm-//include`) with a warning message to include files from the new location (`/opt/rocm-/include`) as shown in the example below. ```cpp #pragma message "This file is deprecated. Use file from include path /opt/rocm-ver/include/ and prefix with hip." #include ``` * Starting at ROCm 5.2 release, the deprecation for backward compatibility wrapper header files is: `#pragma` message announcing `#warning`. * Starting from ROCm 6.0 (tentatively) backward compatibility for wrapper header files will be removed, and the `#pragma` message will be announcing `#error`. ### Executable files Executable files are available in the `/opt/rocm-/bin` folder. 
For backward compatibility, the old executable location (`/opt/rocm-//bin`) has a soft link to the executable at the new location. Soft links will be removed in a future release, tentatively ROCm v6.0.

```bash
$ ls -l /opt/rocm/hip/bin/
lrwxrwxrwx 1 root root 24 Jan 1 23:32 hipcc -> ../../bin/hipcc
```

### Library files

Library files are available in the `/opt/rocm-/lib` folder. For backward compatibility, the old library location (`/opt/rocm-//lib`) has a soft link to the library at the new location. Soft links will be removed in a future release, tentatively ROCm v6.0.

```shell
$ ls -l /opt/rocm/hip/lib/
drwxr-xr-x 4 root root 4096 Jan 1 10:45 cmake
lrwxrwxrwx 1 root root 24 Jan 1 23:32 libamdhip64.so -> ../../lib/libamdhip64.so
```

### CMake config files

All CMake configuration files are available in the `/opt/rocm-/lib/cmake/` folder. For backward compatibility, the old CMake locations (`/opt/rocm-//lib/cmake`) consist of a soft link to the new CMake config. Soft links will be removed in a future release, tentatively ROCm v6.0.

```shell
$ ls -l /opt/rocm/hip/lib/cmake/hip/
lrwxrwxrwx 1 root root 42 Jan 1 23:32 hip-config.cmake -> ../../../../lib/cmake/hip/hip-config.cmake
```

## Changes required in applications using ROCm

Applications using ROCm are advised to use the new file paths, because the old files will be deprecated in a future release. Applications must make sure to include the correct header files and use the correct search paths.

1. `#include` directives need to be changed so that headers are included from the new location, with the component name as a prefix. For example, `#include <header.h>` needs to change to `#include <component/header.h>`.

2. Any variable in CMake or Makefiles pointing to a component folder needs to be changed. For example: `VAR1=/opt/rocm/hip` needs to be changed to `VAR1=/opt/rocm`, and `VAR2=/opt/rocm/hsa` needs to be changed to `VAR2=/opt/rocm`.

3. Any reference to `/opt/rocm//bin` or `/opt/rocm//lib` needs to be changed to `/opt/rocm/bin` and `/opt/rocm/lib/`, respectively.

## Changes in versioning specifications

In order to better manage ROCm dependency specifications and allow smoother ROCm releases while avoiding dependency conflicts, ROCm software adheres to the following scheme when numbering and incrementing ROCm file versions: `rocm-x.y.z`, where `x.y.z` denotes MAJOR.MINOR.PATCH:

* z (PATCH): increment z when implementing backward compatible bug fixes.
* y (MINOR): increment y when implementing minor changes that add functionality but are still backward compatible.
* x (MAJOR): increment x when implementing major changes that are not backward compatible.

---

---
myst:
  html_meta:
    "description lang=en": "Learn about the AMD Instinct MI100 Series architecture."
    "keywords": "Instinct, MI100, microarchitecture, AMD, ROCm"
---

# AMD Instinct™ MI100 microarchitecture

The following image shows the node-level architecture of a system that comprises two AMD EPYC™ processors and (up to) eight AMD Instinct™ GPUs. The two EPYC processors are connected to each other with AMD Infinity™ fabric, which provides high-bandwidth (up to 18 GT/sec), coherent links so that each processor can access the available node memory as a single shared-memory domain in a non-uniform memory architecture (NUMA) fashion. In a 2P, or dual-socket, configuration, three AMD Infinity™ fabric links are available to connect the processors, and one PCIe Gen 4 x16 link per processor can attach additional I/O devices such as the host adapters for the network fabric.
![Node-level system architecture with two AMD EPYC™ processors and eight AMD Instinct™ GPUs](../../data/conceptual/gpu-arch/image004.png "Node-level system architecture with two AMD EPYC™ processors and eight AMD Instinct™ GPUs.")

In a typical node configuration, each processor can host up to four AMD Instinct™ GPUs that are attached using PCIe Gen 4 links at 16 GT/sec, which corresponds to a peak bidirectional link bandwidth of 32 GB/sec. Each hive of four GPUs can participate in a fully connected, coherent AMD Instinct™ fabric that connects the four GPUs using 23 GT/sec AMD Infinity fabric links that run at a higher frequency than the inter-processor links. This inter-GPU link can be established in certified server systems if the GPUs are mounted in neighboring PCIe slots by installing the AMD Infinity Fabric™ bridge for the AMD Instinct™ GPUs.

## Microarchitecture

The microarchitecture of the AMD Instinct GPUs is based on the AMD CDNA architecture, which targets compute applications such as high-performance computing (HPC) and AI & machine learning (ML) that run on everything from individual servers to the world's largest exascale supercomputers. The overall system architecture is designed for extreme scalability and compute performance.

![Structure of the AMD Instinct GPU (MI100 generation)](../../data/conceptual/gpu-arch/image005.png "Structure of the AMD Instinct GPU (MI100 generation)")

The above image shows the AMD Instinct GPU with its PCIe Gen 4 x16 link (16 GT/sec, at the bottom) that connects the GPU to (one of) the host processor(s). It also shows the three AMD Infinity Fabric ports that provide high-speed links (23 GT/sec, also at the bottom) to the other GPUs of the local hive.

On the left and right of the floor plan, the High Bandwidth Memory (HBM) attaches via the GPU memory controller. The MI100 generation of the AMD Instinct GPU offers four stacks of HBM generation 2 (HBM2) for a total of 32 GB with a 4,096-bit-wide memory interface. The peak memory bandwidth of the attached HBM2 is 1.228 TB/sec at a memory clock frequency of 1.2 GHz.

The execution units of the GPU are depicted in the above image as Compute Units (CU). There are a total of 120 compute units that are physically organized into eight Shader Engines (SE) with fifteen compute units per shader engine. Each compute unit is further sub-divided into four SIMD units that process SIMD instructions of 16 data elements per instruction. This enables the CU to process 64 data elements (a so-called 'wavefront') at a peak clock frequency of 1.5 GHz. Therefore, the theoretical maximum FP64 peak performance is 11.5 TFLOPS (`4 [SIMD units] x 16 [elements per instruction] x 120 [CU] x 1.5 [GHz]`).

![Block diagram of an MI100 compute unit with detailed SIMD view of the AMD CDNA architecture](../../data/conceptual/gpu-arch/image006.png "An MI100 compute unit with detailed SIMD view of the AMD CDNA architecture")

The preceding image shows the block diagram of a single CU of an AMD Instinct™ MI100 GPU and summarizes how instructions flow through the execution engines. The CU fetches the instructions via a 32 KB instruction cache and moves them forward to execution via a dispatcher. The CU can handle up to ten wavefronts at a time and feed their instructions into the execution unit. The execution unit contains 256 vector general-purpose registers (VGPR) and 800 scalar general-purpose registers (SGPR). The VGPR and SGPR are dynamically allocated to the executing wavefronts. A wavefront can access a maximum of 102 scalar registers.
Excess scalar-register usage will cause register spilling and thus may affect execution performance. A wavefront can occupy any number of VGPRs from 0 to 256, directly affecting occupancy; that is, the number of concurrently active wavefronts in the CU. For instance, with 119 VGPRs used, only two wavefronts can be active in the CU at the same time. With the instruction latency of four cycles per SIMD instruction, the occupancy should be as high as possible such that the compute unit can improve execution efficiency by scheduling instructions from multiple wavefronts. :::{table} Peak-performance capabilities of MI100 for different data types. :name: mi100-perf | Computation and Data Type | FLOPS/CLOCK/CU | Peak TFLOPS | | :------------------------ | :------------: | ----------: | | Vector FP64 | 64 | 11.5 | | Matrix FP32 | 256 | 46.1 | | Vector FP32 | 128 | 23.1 | | Matrix FP16 | 1024 | 184.6 | | Matrix BF16 | 512 | 92.3 | ::: --- --- myst: html_meta: "description lang=en": "Learn about the AMD Instinct MI250 Series architecture." "keywords": "Instinct, MI250, microarchitecture, AMD, ROCm" --- # AMD Instinct™ MI250 microarchitecture The microarchitecture of the AMD Instinct MI250 GPU is based on the AMD CDNA 2 architecture that targets compute applications such as HPC, artificial intelligence (AI), and machine learning (ML) and that run on everything from individual servers to the world’s largest exascale supercomputers. The overall system architecture is designed for extreme scalability and compute performance. The following image shows the components of a single Graphics Compute Die (GCD) of the CDNA 2 architecture. On the top and the bottom are AMD Infinity Fabric™ interfaces and their physical links that are used to connect the GPU die to the other system-level components of the node (see also Section 2.2). Both interfaces can drive four AMD Infinity Fabric links. One of the AMD Infinity Fabric links of the controller at the bottom can be configured as a PCIe link. Each of the AMD Infinity Fabric links between GPUs can run at up to 25 GT/sec, which correlates to a peak transfer bandwidth of 50 GB/sec for a 16-wide link ( two bytes per transaction). Section 2.2 has more details on the number of AMD Infinity Fabric links and the resulting transfer rates between the system-level components. To the left and the right are memory controllers that attach the High Bandwidth Memory (HBM) modules to the GCD. AMD Instinct MI250 GPUs use HBM2e, which offers a peak memory bandwidth of 1.6 TB/sec per GCD. The execution units of the GPU are depicted in the following image as Compute Units (CU). The MI250 GCD has 104 active CUs. Each compute unit is further subdivided into four SIMD units that process SIMD instructions of 16 data elements per instruction (for the FP64 data type). This enables the CU to process 64 work items (a so-called “wavefront”) at a peak clock frequency of 1.7 GHz. Therefore, the theoretical maximum FP64 peak performance per GCD is 22.6 TFLOPS for vector instructions. This equates to 45.3 TFLOPS for vector instructions for both GCDs together. The MI250 compute units also provide specialized execution units (also called matrix cores), which are geared toward executing matrix operations like matrix-matrix multiplications. For FP64, the peak performance of these units amounts to 90.5 TFLOPS. 
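The peak numbers quoted above follow directly from the per-CU throughput, the CU count, and the peak clock. As a quick sanity check (not part of any ROCm tool), the following Python sketch recomputes the MI250 FP64 figures from the values given in this section: 104 CUs per GCD, a 1.7 GHz peak clock, and 128 (vector) or 256 (matrix) FP64 FLOPS per clock per CU.

```python
# Back-of-the-envelope check of the MI250 FP64 peak numbers quoted in this section.
CUS_PER_GCD = 104        # active compute units per GCD
GCDS_PER_OAM = 2         # two GCDs per MI250 OAM package
PEAK_CLOCK_GHZ = 1.7     # peak engine clock

def peak_tflops(flops_per_clock_per_cu, gcds=1):
    """Peak TFLOPS = FLOPS/clock/CU x CUs x clock (GHz) x GCD count / 1000."""
    return flops_per_clock_per_cu * CUS_PER_GCD * PEAK_CLOCK_GHZ * gcds / 1000.0

print(f"Vector FP64, one GCD : {peak_tflops(128):.1f} TFLOPS")                # ~22.6
print(f"Vector FP64, full OAM: {peak_tflops(128, GCDS_PER_OAM):.1f} TFLOPS")  # ~45.3
print(f"Matrix FP64, full OAM: {peak_tflops(256, GCDS_PER_OAM):.1f} TFLOPS")  # ~90.5
```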
![Structure of a single GCD in the AMD Instinct MI250 GPU.](../../data/conceptual/gpu-arch/image001.png "Structure of a single GCD in the AMD Instinct MI250 GPU.")

```{list-table} Peak-performance capabilities of the MI250 OAM for different data types.
:header-rows: 1
:name: mi250-perf-table

* - Computation and Data Type
  - FLOPS/CLOCK/CU
  - Peak TFLOPS
* - Matrix FP64
  - 256
  - 90.5
* - Vector FP64
  - 128
  - 45.3
* - Matrix FP32
  - 256
  - 90.5
* - Packed FP32
  - 256
  - 90.5
* - Vector FP32
  - 128
  - 45.3
* - Matrix FP16
  - 1024
  - 362.1
* - Matrix BF16
  - 1024
  - 362.1
* - Matrix INT8
  - 1024
  - 362.1
```

The above table summarizes the aggregated peak performance of the AMD Instinct MI250 Open Compute Platform (OCP) Open Accelerator Module (OAM) and its two GCDs for different data types and execution units. The middle column lists the peak performance (number of data elements processed in a single instruction) of a single compute unit if a SIMD (or matrix) instruction is being retired in each clock cycle. The third column lists the theoretical peak performance of the OAM module. The theoretical aggregated peak memory bandwidth of the GPU is 3.2 TB/sec (1.6 TB/sec per GCD).

![Dual-GCD architecture of the AMD Instinct MI250 GPUs](../../data/conceptual/gpu-arch/image002.png "Dual-GCD architecture of the AMD Instinct MI250 GPUs")

The preceding image shows the block diagram of an OAM package that consists of two GCDs, each of which constitutes one GPU device in the system. The two GCDs in the package are connected via four AMD Infinity Fabric links running at a theoretical peak rate of 25 GT/sec, giving 200 GB/sec peak transfer bandwidth between the two GCDs of an OAM, or a bidirectional peak transfer bandwidth of 400 GB/sec.

## Node-level architecture

The following image shows the node-level architecture of a system that is based on the AMD Instinct MI250 GPU. The MI250 OAMs attach to the host system via PCIe Gen 4 x16 links (yellow lines). Each GCD maintains its own PCIe x16 link to the host part of the system. Depending on the server platform, the GCD can attach to the AMD EPYC processor directly or via an optional PCIe switch. Note that some platforms may offer an x8 interface to the GCDs, which reduces the available host-to-GPU bandwidth.

![Block diagram of AMD Instinct MI250 GPUs with 3rd Generation AMD EPYC processor](../../data/conceptual/gpu-arch/image003.png "Block diagram of AMD Instinct MI250 GPUs with 3rd Generation AMD EPYC processor")

The preceding image shows the node-level architecture of a system with AMD EPYC processors in a dual-socket configuration and four AMD Instinct MI250 GPUs. The MI250 OAMs attach to the host system via PCIe Gen 4 x16 links (yellow lines). Depending on the system design, a PCIe switch may exist to make more PCIe lanes available for additional components like network interfaces and/or storage devices. Each GCD maintains its own PCIe x16 link to the host part of the system or to the PCIe switch. Please note that some platforms may offer an x8 interface to the GCDs, which will reduce the available host-to-GPU bandwidth.

Between the OAMs and their respective GCDs, a peer-to-peer (P2P) network allows for direct data exchange between the GPU dies via AMD Infinity Fabric links (black, green, and red lines). Each of these 16-wide links connects to one of the two GPU dies in the MI250 OAM and operates at 25 GT/sec, which corresponds to a theoretical peak transfer rate of 50 GB/sec per link (or 100 GB/sec bidirectional peak transfer bandwidth).
The GCD pairs 2 and 6 as well as GCDs 0 and 4 connect via two XGMI links, which is indicated by the thicker red line in the preceding image. --- --- myst: html_meta: "description lang=en": "Learn about the AMD Instinct MI300 Series architecture." "keywords": "Instinct, MI300X, MI300A, microarchitecture, AMD, ROCm" --- # AMD Instinct™ MI300 Series microarchitecture The AMD Instinct MI300 Series GPUs are based on the AMD CDNA 3 architecture which was designed to deliver leadership performance for HPC, artificial intelligence (AI), and machine learning (ML) workloads. The AMD Instinct MI300 Series GPUs are well-suited for extreme scalability and compute performance, running on everything from individual servers to the world’s largest exascale supercomputers. With the MI300 Series, AMD is introducing the Accelerator Complex Die (XCD), which contains the GPU computational elements of the processor along with the lower levels of the cache hierarchy. The following image depicts the structure of a single XCD in the AMD Instinct MI300 GPU Series. ```{figure} ../../data/shared/xcd-sys-arch.png --- name: mi300-xcd align: center --- XCD-level system architecture showing 40 Compute Units, each with 32 KB L1 cache, a Unified Compute System with 4 ACE Compute Accelerators, shared 4MB of L2 cache and an HWS Hardware Scheduler. ``` On the XCD, four Asynchronous Compute Engines (ACEs) send compute shader workgroups to the Compute Units (CUs). The XCD has 40 CUs: 38 active CUs at the aggregate level and 2 disabled CUs for yield management. The CUs all share a 4 MB L2 cache that serves to coalesce all memory traffic for the die. With less than half of the CUs of the AMD Instinct MI200 Series compute die, the AMD CDNA™ 3 XCD die is a smaller building block. However, it uses more advanced packaging and the processor can include 6 or 8 XCDs for up to 304 CUs, roughly 40% more than MI250X. The MI300 Series integrate up to 8 vertically stacked XCDs, 8 stacks of High-Bandwidth Memory 3 (HBM3) and 4 I/O dies (containing system infrastructure) using the AMD Infinity Fabric™ technology as interconnect. The Matrix Cores inside the CDNA 3 CUs have significant improvements, emphasizing AI and machine learning, enhancing throughput of existing data types while adding support for new data types. CDNA 2 Matrix Cores support FP16 and BF16, while offering INT8 for inference. Compared to MI250X GPUs, CDNA 3 Matrix Cores triple the performance for FP16 and BF16, while providing a performance gain of 6.8 times for INT8. FP8 has a performance gain of 16 times compared to FP32, while TF32 has a gain of 4 times compared to FP32. ```{list-table} Peak-performance capabilities of the MI300X for different data types. :header-rows: 1 :name: mi300x-perf-table * - Computation and Data Type - FLOPS/CLOCK/CU - Peak TFLOPS * - Matrix FP64 - 256 - 163.4 * - Vector FP64 - 128 - 81.7 * - Matrix FP32 - 256 - 163.4 * - Vector FP32 - 256 - 163.4 * - Vector TF32 - 1024 - 653.7 * - Matrix FP16 - 2048 - 1307.4 * - Matrix BF16 - 2048 - 1307.4 * - Matrix FP8 - 4096 - 2614.9 * - Matrix INT8 - 4096 - 2614.9 ``` The above table summarizes the aggregated peak performance of the AMD Instinct MI300X Open Compute Platform (OCP) Open Accelerator Modules (OAMs) for different data types and command processors. The middle column lists the peak performance (number of data elements processed in a single instruction) of a single compute unit if a SIMD (or matrix) instruction is submitted in each clock cycle. 
The third column lists the theoretical peak performance of the OAM. The theoretical aggregated peak memory bandwidth of the GPU is 5.3 TB per second.

The following image shows the block diagram of the APU (left) and the OAM package (right), both connected via the AMD Infinity Fabric™ network on-chip.

```{figure} ../../data/conceptual/gpu-arch/image008.png
---
name: mi300-arch
alt: MI300 Series system architecture
align: center
---
MI300 Series system architecture showing MI300A (left) with 6 XCDs and 3 CCDs, while the MI300X (right) has 8 XCDs.
```

## Node-level architecture

```{figure} ../../data/shared/mi300-node-level-arch.png
---
name: mi300-node
align: center
---
MI300 Series node-level architecture showing 8 fully interconnected MI300X OAM modules connected to (optional) PCIe switches via retimers and HGX connectors.
```

The image above shows the node-level architecture of a system with AMD EPYC processors in a dual-socket configuration and eight AMD Instinct MI300X GPUs. The MI300X OAMs attach to the host system via PCIe Gen 5 x16 links (yellow lines). The GPUs use seven high-bandwidth, low-latency AMD Infinity Fabric™ links (red lines) to form a fully connected 8-GPU system.

---

(gpu-arch-documentation)=

# GPU architecture documentation

:::::{grid} 1 1 2 2
:gutter: 1

:::{grid-item-card} **AMD Instinct MI300 Series**

Review hardware aspects of the AMD Instinct™ MI300 Series GPUs and the CDNA™ 3 architecture.

* [AMD Instinct™ MI300 microarchitecture](./gpu-arch/mi300.md)
* [AMD Instinct MI300/CDNA3 ISA](https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/instruction-set-architectures/amd-instinct-mi300-cdna3-instruction-set-architecture.pdf)
* [White paper](https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/white-papers/amd-cdna-3-white-paper.pdf)
* [MI300 performance counters](./gpu-arch/mi300-mi200-performance-counters.rst)
* [MI350 Series performance counters](./gpu-arch/mi350-performance-counters.rst)

:::

:::{grid-item-card} **AMD Instinct MI200 Series**

Review hardware aspects of the AMD Instinct™ MI200 Series GPUs and the CDNA™ 2 architecture.

* [AMD Instinct™ MI250 microarchitecture](./gpu-arch/mi250.md)
* [AMD Instinct MI200/CDNA2 ISA](https://www.amd.com/system/files/TechDocs/instinct-mi200-cdna2-instruction-set-architecture.pdf)
* [White paper](https://www.amd.com/content/dam/amd/en/documents/instinct-business-docs/white-papers/amd-cdna2-white-paper.pdf)
* [Performance counters](./gpu-arch/mi300-mi200-performance-counters.rst)

:::

:::{grid-item-card} **AMD Instinct MI100**

Review hardware aspects of the AMD Instinct™ MI100 Series GPUs and the CDNA™ 1 architecture.
* [AMD Instinct™ MI100 microarchitecture](./gpu-arch/mi100.md) * [AMD Instinct MI100/CDNA1 ISA](https://www.amd.com/system/files/TechDocs/instinct-mi100-cdna1-shader-instruction-set-architecture%C2%A0.pdf) * [White paper](https://www.amd.com/content/dam/amd/en/documents/instinct-business-docs/white-papers/amd-cdna-white-paper.pdf) ::: :::{grid-item-card} **RDNA** * [AMD RDNA4 ISA](https://www.amd.com/content/dam/amd/en/documents/radeon-tech-docs/instruction-set-architectures/rdna4-instruction-set-architecture.pdf) * [AMD RDNA3 ISA](https://www.amd.com/system/files/TechDocs/rdna3-shader-instruction-set-architecture-feb-2023_0.pdf) * [AMD RDNA2 ISA](https://www.amd.com/system/files/TechDocs/rdna2-shader-instruction-set-architecture.pdf) * [AMD RDNA ISA](https://www.amd.com/system/files/TechDocs/rdna-shader-instruction-set-architecture.pdf) ::: :::{grid-item-card} **Older architectures** * [AMD Instinct MI50/Vega 7nm ISA](https://www.amd.com/system/files/TechDocs/vega-7nm-shader-instruction-set-architecture.pdf) * [AMD Instinct MI25/Vega ISA](https://www.amd.com/system/files/TechDocs/vega-shader-instruction-set-architecture.pdf) * [AMD GCN3 ISA](https://www.amd.com/system/files/TechDocs/gcn3-instruction-set-architecture.pdf) * [AMD Vega Architecture White Paper](https://en.wikichip.org/w/images/a/a1/vega-whitepaper.pdf) ::: ::::: --- # GPU isolation techniques Restricting the access of applications to a subset of GPUs, aka isolating GPUs allows users to hide GPU resources from programs. The programs by default will only use the "exposed" GPUs ignoring other (hidden) GPUs in the system. There are multiple ways to achieve isolation of GPUs in the ROCm software stack, differing in which applications they apply to and the security they provide. This page serves as an overview of the techniques. ## Environment variables The runtimes in the ROCm software stack read these environment variables to select the exposed or default device to present to applications using them. Environment variables shouldn't be used for isolating untrusted applications, as an application can reset them before initializing the runtime. ### `ROCR_VISIBLE_DEVICES` A list of device indices or {abbr}`UUID (universally unique identifier)`s that will be exposed to applications. Runtime : ROCm Software Runtime. Applies to all applications using the user mode ROCm software stack. ```{code-block} shell :caption: Example to expose the 1. device and a device based on UUID. export ROCR_VISIBLE_DEVICES="0,GPU-4b2c1a9f-8d3e-6f7a-b5c9-2e4d8a1f6c3b" ``` ### `GPU_DEVICE_ORDINAL` Devices indices exposed to OpenCL and HIP applications. Runtime : ROCm Compute Language Runtime (`ROCclr`). Applies to applications and runtimes using the `ROCclr` abstraction layer including HIP and OpenCL applications. ```{code-block} shell :caption: Example to expose the 1. and 3. device in the system. export GPU_DEVICE_ORDINAL="0,2" ``` (hip_visible_devices)= ### `HIP_VISIBLE_DEVICES` Device indices exposed to HIP applications. Runtime: HIP runtime. Applies only to applications using HIP on the AMD platform. ```{code-block} shell :caption: Example to expose the 1. and 3. devices in the system. export HIP_VISIBLE_DEVICES="0,2" ``` ### `CUDA_VISIBLE_DEVICES` Provided for CUDA compatibility, has the same effect as `HIP_VISIBLE_DEVICES` on the AMD platform. Runtime : HIP or CUDA Runtime. Applies to HIP applications on the AMD or NVIDIA platform and CUDA applications. ### `OMP_DEFAULT_DEVICE` Default device used for OpenMP target offloading. 
Runtime : OpenMP Runtime. Applies only to applications using OpenMP offloading. ```{code-block} shell :caption: Example on setting the default device to the third device. export OMP_DEFAULT_DEVICE="2" ``` ## Docker Docker uses Linux kernel namespaces to provide isolated environments for applications. This isolation applies to most devices by default, including GPUs. To access them in containers explicit access must be granted, please see {ref}`docker-access-gpus-in-container` for details. Specifically refer to {ref}`docker-restrict-gpus` on exposing just a subset of all GPUs. Docker isolation is more secure than environment variables, and applies to all programs that use the `amdgpu` kernel module interfaces. Even programs that don't use the ROCm runtime, like graphics applications using OpenGL or Vulkan, can only access the GPUs exposed to the container. ## GPU passthrough to virtual machines Virtual machines achieve the highest level of isolation, because even the kernel of the virtual machine is isolated from the host. Devices physically installed in the host system can be passed to the virtual machine using PCIe passthrough. This allows for using the GPU with a different operating systems like a Windows guest from a Linux host. Setting up PCIe passthrough is specific to the hypervisor used. ROCm officially supports [VMware ESXi](https://www.vmware.com/products/esxi-and-esx.html) for select GPUs. --- # Building documentation ## GitHub If you open a pull request and scroll down to the summary panel, there is a commit status section. Next to the line `docs/readthedocs.com:advanced-micro-devices-demo`, there is a `Details` link. If you click this, it takes you to the Read the Docs build for your pull request. ![GitHub PR commit status](../data/contribute/commit-status.png) If you don't see this line, click `Show all checks` to get an itemized view. ## Command line You can build our documentation via the command line using Python. See the `build.tools.python` setting in the [Read the Docs configuration file](https://github.com/ROCm/ROCm/blob/develop/.readthedocs.yaml) for the Python version used by Read the Docs to build documentation. See the [Python requirements file](https://github.com/ROCm/ROCm/blob/develop/docs/sphinx/requirements.txt) for Python packages needed to build the documentation. Use the Python Virtual Environment (`venv`) and run the following commands from the project root: ::::{tab-set} :::{tab-item} Linux and WSL :sync: linux ```sh python3 -mvenv .venv .venv/bin/python -m pip install -r docs/sphinx/requirements.txt .venv/bin/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html ``` ::: :::{tab-item} Windows :sync: windows ```powershell python -mvenv .venv .venv\Scripts\python.exe -m pip install -r docs/sphinx/requirements.txt .venv\Scripts\python.exe -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html ``` ::: :::: Navigate to `_build/html/index.html` and open this file in a web browser. ## Visual Studio Code With the help of a few extensions, you can create a productive environment to author and test documentation locally using Visual Studio (VS) Code. Follow these steps to configure VS Code: 1. Install the required extensions: * Python: `(ms-python.python)` * Live Server: `(ritwickdey.LiveServer)` 2. Add the following entries to `.vscode/settings.json`. 
```json { "liveServer.settings.root": "/.vscode/build/html", "liveServer.settings.wait": 1000, "python.terminal.activateEnvInCurrentTerminal": true } ``` * `liveServer.settings.root`: Sets the root of the output website for live previews. Must be changed alongside the `tasks.json` command. * `liveServer.settings.wait`: Tells the live server to wait with the update in order to give Sphinx time to regenerate the site contents and not refresh before the build is complete. * `python.terminal.activateEnvInCurrentTerminal`: Activates the automatic virtual environment, so you can build the site from the integrated terminal. 3. Add the following tasks to `.vscode/tasks.json`. ```json { "version": "2.0.0", "tasks": [ { "label": "Build Docs", "type": "process", "windows": { "command": "${workspaceFolder}/.venv/Scripts/python.exe" }, "command": "${workspaceFolder}/.venv/bin/python3", "args": [ "-m", "sphinx", "-j", "auto", "-T", "-b", "html", "-d", "${workspaceFolder}/.vscode/build/doctrees", "-D", "language=en", "${workspaceFolder}/docs", "${workspaceFolder}/.vscode/build/html" ], "problemMatcher": [ { "owner": "sphinx", "fileLocation": "absolute", "pattern": { "regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):(\\d+):\\s+(WARNING|ERROR):\\s+(.*)$", "file": 1, "line": 2, "severity": 3, "message": 4 } }, { "owner": "sphinx", "fileLocation": "absolute", "pattern": { "regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):{1,2}\\s+(WARNING|ERROR):\\s+(.*)$", "file": 1, "severity": 2, "message": 3 } } ], "group": { "kind": "build", "isDefault": true } } ] } ``` > Implementation detail: two problem matchers were needed to be defined, > because VS Code doesn't tolerate some problem information being potentially > absent. While a single regex could match all types of errors, if a capture > group remains empty (the line number doesn't show up in all warning/error > messages) but the `pattern` references said empty capture group, VS Code > discards the message completely. 4. Configure the Python virtual environment (`venv`). From the Command Palette, run `Python: Create Environment`. Select `venv` environment and `docs/sphinx/requirements.txt`. 5. Build the docs. Launch the default build task using one of the following options: * A hotkey (the default is `Ctrl+Shift+B`) * Issuing the `Tasks: Run Build Task` from the Command Palette 6. Open the live preview. Navigate to the site output within VS Code: right-click on `.vscode/build/html/index.html` and select `Open with Live Server`. The contents should update on every rebuild without having to refresh the browser. --- # Contributing to the ROCm documentation The ROCm documentation, like all of ROCm, is open source and available on GitHub. You can contribute to the ROCm documentation by forking the appropriate repository, making your changes, and opening a pull request. To provide feedback on the ROCm documentation, including submitting an issue or suggesting a feature, see [Providing feedback about the ROCm documentation](./feedback.md). ## The ROCm repositories The repositories for ROCm and all ROCm components are available on GitHub. 
| Module | Documentation location | | --- | --- | | ROCm framework | [https://github.com/ROCm/ROCm/tree/develop/docs](https://github.com/ROCm/ROCm/tree/develop/docs) | | ROCm installation for Linux | [https://github.com/ROCm/rocm-install-on-linux/tree/develop/docs](https://github.com/ROCm/rocm-install-on-linux/tree/develop/docs) | | ROCm HIP SDK installation for Windows | [https://github.com/ROCm/rocm-install-on-windows/tree/develop/docs](https://github.com/ROCm/rocm-install-on-windows/tree/develop/docs) | Individual components have their own repositories with their own documentation in their own `docs` folders. The sub-folders within the `docs` folders across ROCm are typically structured as follows: | Sub-folder name | Documentation type | |-------|----------| | `install` | Installation instructions, build instructions, and prerequisites | | `conceptual` | Important concepts | | `how-to` | How to implement specific use cases | | `tutorials` | Tutorials | | `reference` | API references and other reference resources | ## Editing and adding to the documentation ROCm documentation follows the [Google developer documentation style guide](https://developers.google.com/style/highlights). Most topics in the ROCm documentation are written in [reStructuredText (rst)](https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html), with some topics written in Markdown. Only use reStructuredText when adding new topics. Only use Markdown if the topic you are editing is already in Markdown. To edit or add to the documentation: 1. Fork the repository you want to add to or edit. 2. Clone your fork locally. 3. Create a new local branch cut from the `develop` branch of the repository. 4. Make your changes to the documentation. 5. Optionally, build the documentation locally before creating a pull request by running the following commands from within the `docs` folder: ```bash pip3 install -r sphinx/requirements.txt # You only need to run this command once python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html ``` The output files will be located in the `docs/_build` folder. Open `docs/_build/html/index.html` to view the documentation. For more information on ROCm build tools, see [Documentation toolchain](toolchain.md). 6. Push your changes. A GitHub link will be returned in the output of the `git push` command. Open this link in a browser to create the pull request. The documentation is built as part of the checks on pull request, along with spell checking and linting. Scroll to the bottom of your pull request to view all the checks. Verify that the linting and spell checking have passed, and that the documentation was built successfully. New words or acronyms can be added to the [wordlist file](https://github.com/ROCm/rocm-docs-core/blob/develop/.wordlist.txt). The wordlist is subject to approval by the ROCm documentation team. The Read The Docs build of your pull request can be accessed by clicking on the Details link next to the Read The Docs build check. Verify that your changes are in the build and look as expected. ![The GitHub checks are collapsed by default and can be accessed by clicking on "Show All Checks".](../data/contribute/GitHubCheck-Highlight.png) ![The Read The Docs Build is accessed from the Details link in the Read The Docs check.](../data/contribute/GitHub-ReadThe-Docs-Highlight.png) Your pull request will be reviewed by a member of the ROCm documentation team. 
See the [GitHub documentation](https://docs.github.com/en) for information on how to fork and clone a repository, and how to create and push a local branch. ```{important} By creating a pull request (PR), you agree to allow your contribution to be licensed under the terms of the LICENSE.txt file in the corresponding repository. Different repositories can use different licenses. ``` --- # Providing feedback about the ROCm documentation Feedback about the ROCm documentation is welcome. You can provide feedback about the ROCm documentation either through GitHub Discussions or GitHub Issues. ## Participating in discussions through GitHub Discussions You can ask questions, view announcements, suggest new features, and communicate with other members of the community through [GitHub Discussions](https://github.com/ROCm/ROCm/discussions). ## Submitting issues through GitHub Issues You can submit issues through [GitHub Issues](https://github.com/ROCm/ROCm/issues). When creating a new issue, follow the following guidelines: 1. Always do a search to see if the same issue already exists. If the issue already exists, upvote it, and comment or post to provide any additional details you might have. 2. If you find an issue that is similar to your issue, log your issue, then add a comment that includes a link to the similar issue, as well as its issue number. 3. Always provide as much information as possible. This helps reduce the time required to reproduce the issue. After creating your issue, make sure to check it regularly for any requests for additional information. For information about contributing content to the ROCm documentation, see [Contributing to the ROCm documentation](./contributing.md). --- # ROCm documentation toolchain The ROCm documentation relies on several open source toolchains and sites. ## rocm-docs-core [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) is an AMD-maintained project that applies customizations for the ROCm documentation. This project is the tool most ROCm repositories use as part of their documentation build pipeline. It is available as a [pip package on PyPI](https://pypi.org/project/rocm-docs-core/). See the user and developer guides for rocm-docs-core at {doc}`rocm-docs-core documentation`. ## Sphinx [Sphinx](https://www.sphinx-doc.org/en/master/) is a documentation generator originally used for Python. It is now widely used in the open source community. ### Sphinx External ToC [Sphinx External ToC](https://sphinx-external-toc.readthedocs.io/en/latest/intro.html) is a Sphinx extension used for ROCm documentation navigation. This tool generates a navigation menu on the left based on a YAML file (`_toc.yml.in`) that contains the table of contents. ### Sphinx-book-theme [Sphinx-book-theme](https://sphinx-book-theme.readthedocs.io/en/latest/) is a Sphinx theme that defines the base appearance for ROCm documentation. ROCm documentation applies some customization, such as a custom header and footer, on top of the Sphinx Book Theme. ### Sphinx Design [Sphinx design](https://sphinx-design.readthedocs.io/en/latest/index.html) is a Sphinx extension that adds design functionality. ROCm documentation uses Sphinx Design for grids, cards, and synchronized tabs. ## Doxygen [Doxygen](https://www.doxygen.nl/) is a documentation generator that extracts information from in-code comments. It is used for API documentation. ## Breathe [Breathe](https://www.breathe-doc.org/) is a Sphinx plugin for integrating Doxygen content. 
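To illustrate how the Doxygen and Breathe pieces of this toolchain fit together, the following minimal `conf.py` fragment registers a Doxygen XML output directory with Breathe. The project name and XML path are placeholders, not the actual ROCm configuration; in practice, most ROCm repositories get this wiring through rocm-docs-core rather than configuring it by hand.

```python
# Illustrative Sphinx conf.py fragment only -- not the actual ROCm configuration.
# Assumes Doxygen has already been run and its XML output is in doxygen/xml.
extensions = [
    "breathe",  # bridges Doxygen XML into Sphinx
]

# Map a Breathe project name to the directory containing the Doxygen XML.
breathe_projects = {
    "example_project": "doxygen/xml",  # placeholder name and path
}
breathe_default_project = "example_project"
```

With a configuration like this, Breathe directives such as `doxygenfunction` and `doxygenclass` can pull the Doxygen-extracted API descriptions into reStructuredText pages.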
## Read the Docs

[Read the Docs](https://docs.readthedocs.io/en/stable/) is the service that builds and hosts the HTML version of the ROCm documentation.

---

---
myst:
  html_meta:
    "description": "How to optimize machine learning workloads with Composable Kernel (CK)."
    "keywords": "mixed, precision, kernel, inference, linear, algebra, ck, GEMM"
---

# Optimizing with Composable Kernel

The AMD ROCm Composable Kernel (CK) library provides a programming model for writing performance-critical kernels for machine learning workloads. It generates a general-purpose kernel during the compilation phase through a C++ template, enabling developers to achieve operation fusions on different data precisions.

This article gives a high-level overview of the CK General Matrix Multiplication (GEMM) kernel based on the design example `03_gemm_bias_relu`. It also outlines the steps to construct the kernel and run it. Moreover, the article provides a detailed implementation of running SmoothQuant quantized INT8 models on AMD Instinct MI300X GPUs using CK.

## High-level overview: a CK GEMM instance

GEMM is a fundamental block in linear algebra, machine learning, and deep neural networks. It is defined as the operation {math}`E = α \times (A \times B) + β \times (D)`, with A and B as matrix inputs, α and β as scalar inputs, and D as a pre-existing matrix. Take the commonly used linear transformation in a fully connected layer as an example. These terms correspond to input activation (A), weight (B), bias (D), and output (E), respectively.

The example employs a `DeviceGemmMultipleD_Xdl_CShuffle` struct from the CK library as the fundamental instance to explore the compute capability of AMD Instinct GPUs for the computation of GEMM. The implementation of the instance contains two phases:

- [Template parameter definition](#template-parameter-definition)
- [Instantiating and running the templated kernel](#instantiating-and-running-the-templated-kernel)

### Template parameter definition

The template parameters of the instance are grouped into four parameter types:

- [Parameters for determining matrix data precision](matrix-data-precision)
- [Parameters for determining matrix data layout](matrix-data-layout)
- [Parameters for determining extra operations on matrix elements](matrix-element-operation)
- [Performance-oriented tunable parameters](tunable-parameters)

```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-template_parameters.jpg
The template parameters of the selected GEMM kernel are classified into four groups. These template parameter groups should be defined properly before running the instance.
```

(matrix-data-precision)=

#### Matrix data precision

A, B, D, and E are defined as half-precision floating-point datatypes. The multiply-add results of matrix A and B are added with a pre-existing matrix D (half-precision), and the final GEMM results are also half-precision floating-points.

```c++
using ADataType        = F16;
using BDataType        = F16;
using AccDataType      = F32;
using CShuffleDataType = F16;
using DDataType        = F16;
using EDataType        = F16;
```

`ADataType` and `BDataType` denote the data precision of the A and B input matrices. `AccDataType` determines the data precision used for representing the multiply-add results of A and B elements. These results are stored, for later use, in a `CShuffle` module in the local data share (LDS), a low-latency, high-bandwidth, explicitly addressed memory that is also used for synchronization within a workgroup. `CShuffleDataType` denotes the data precision of `CShuffle` in LDS.
`DDataType` denotes the data precision of the pre-existing D matrix stored in GPU global memory, while `EDatatype` denotes the data precision of the final output. The CK kernel supports a fusion strategy so that `CShuffle` can be added with a single pre-existing matrix in the same GPU kernel for better performance. (matrix-data-layout)= #### Matrix data layout ```c++ using ALayout = Row; using BLayout = Col; using DLayout = Row; using ELayout = Row; ``` Following the convention of various linear algebra libraries, CK assumes that the input matrix A is an M x K matrix, meaning the matrix has M rows and K columns. Similarly, matrix B is assumed to be K x N, meaning it has K rows and N columns. In computing, row-major order and column-major order are commonly used ways to store matrices in linear storage. After understanding the matrix storage pattern, the underlying optimized memory access manner can be applied to achieve better performance depending on the storage ordering of these matrices. (matrix-element-operation)= #### Matrix element operation ```c++ using AElementOp = PassThrough; using BElementOp = PassThrough; using CDEElementOp = AddRelu; ``` CK supports the pre-processing of the matrix before calculating GEMM, that is, `C = AElementOp(A) * BElementOp(B)`. It similarly supports the post-processing of GEMM results the same way, that is, `E = CDEElementOp(C, D)`. `AElementOp` and `BElementOp` determine the operation applied to matrix A and B separately before GEMM, which is achieved by binding the operation with a C++ struct function. The above `PassThrough` denotes no operations are performed on the target matrix. `CDEELementOp` determines the operations applied to `CShuffle` output and matrix D. The following binding struct `AddRelu` shows an example of adding the `CShuffle` output and matrix D, and ReLU (Rectified Linear Unit) operations to the addition result. It then passes the results to matrix E. ```c++ struct AddRelu { __host__ __device__ void operator()(ck::half_t& e, const ck::half_t& c, const ck::half_t& d) const { const ck::half_t x = c + d; e = x > 0 ? x : 0; } }; ``` (tunable-parameters)= #### Tunable parameters The CK instance includes a series of tunable template parameters to control the parallel granularity of the workload to achieve load balancing on different hardware platforms. These parameters include Block Size, M/N/K Per Block, M/N per XDL, AK1, BK1, etc. - Block Size determines the number of threads in the thread block. - M/N/K Per Block determines the size of tile that each thread block is responsible for calculating. - M/N Per XDL refers to M/N size for Instinct GPU Matrix Fused Multiply Add (MFMA) instructions operating on a per-wavefront basis. - A/B K1 is related to the data type. It can be any value ranging from 1 to K Per Block. To achieve the optimal load/store performance, 128bit per load is suggested. In addition, the A/B loading parameters must be changed accordingly to match the A/B K1 value; otherwise, it will result in compilation errors. Conditions for achieving computational load balancing on different hardware platforms can vary. ### Instantiating and running the templated kernel After determining the template parameters, we instantiate the kernel with actual arguments. Do one of the following: - Use `GetDeviceBuffer` from CK’s custom struct `DeviceMem` to pass the element values of the matrices that need to be calculated. - Allocate device buffer via `hipMalloc`. Ensure the device buffer size can fit the matrix size. 
- Pass matrix elements through the `data_ptr` method in the `Tensor` object if the matrix to be calculated is of `Tensor` type. The row and column, and stride information of input matrices are also passed to the instance. For batched GEMM, you must pass in additional batch count and batch stride values. The extra operations for pre and post-processing are also passed with an actual argument; for example, α and β for GEMM scaling operations. Afterward, the instantiated kernel is launched by the invoker, as illustrated in Figure 3. ```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-kernel_launch.jpg Templated kernel launching consists of kernel instantiation, making arguments by passing in actual application parameters, creating an invoker, and running the instance through the invoker. ``` ## Developing fused INT8 kernels for SmoothQuant models [SmoothQuant](https://github.com/mit-han-lab/smoothquant) (SQ) is a quantization algorithm that enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLM. The required GPU kernel functionalities used to accelerate the inference of SQ models on Instinct GPUs are shown in the following table. :::{table} Functionalities used to implement SmoothQuant model inference. | Functionality descriptions | Corresponding wrappers | |:-------------------------------------|-----------------------------------------| | {math}`E = α \times (A \times B) + β \times (D)`, where A, B, D, E are INT8 2-D tensors; | E = Linear_ABDE_I8(A, B, D, {math}`\alpha`, {math}`\beta`) | | {math}`E = RELU (α \times (A \times B) + β \times (D))`, where A, B, D, E are INT8 2-D tensors; | E = Linear_ReLU_ABDE_I8(A, B, D, {math}`\alpha`, {math}`\beta`) | | {math}`E = α \times (A \times B) + β \times (D)`, where A, B are INT8 2-D tensors, D and E are FP32 2-D tensors; | E = Linear_AB_I8_DE_F32(A, B, D, {math}`\alpha`, {math}`\beta`) | | {math}`E = α \times (A \times B)`, where A, B, E are INT8 3-D tensors; | E = BMM_ABE_I8(A, B, {math}`\alpha`) | | {math}`E = α \times (A \times B)`, where A, B are INT8 3-D tensors, E is FP32 3-D tensor; | E = BMM_AB_I8_E_F32(A, B, {math}`\alpha`) | ::: ### Operation flow analysis The following section discusses the analysis of the operation flow of `Linear_ReLU_ABDE_I8`. The rest of the wrappers in Table 1 can be analyzed similarly. The first operation in the process is to perform the multiplication of input matrices A and B. The resulting matrix C is then scaled with α to obtain T1. At the same time, the process performs a scaling operation on D elements to obtain T2. Afterward, the process performs matrix addition between T1 and T2, element activation calculation using ReLU, and element rounding sequentially. The operations to generate E1, E2, and E are encapsulated and completed by a user-defined template function in CK (given in the next sub-section). This template function is integrated into the fundamental instance directly during the compilation phase so that all these steps can be fused in a single GPU kernel. ```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-operation_flow.jpg Operation flow. ``` The CK library contains many fundamental instances that implement different functions. Familiarize yourself with the names of various CK instances and determine whether they meet the target functional requirements. Second, consider whether the format of input data meets your actual calculation needs. For SQ models, the 8-bit integer data format (INT8) is applied for matrix calculations. 
Third, consider the platform for implementing CK instances. The instances suffixed with `xdl` only run on AMD Instinct GPUs after being compiled and cannot run on Radeon-Series GPUs. This is due to the underlying device-specific instruction sets for implementing these basic instances.

Here, we use [DeviceBatchedGemmMultiD_Xdl](https://github.com/ROCm/composable_kernel/tree/develop/example/24_batched_gemm) as the fundamental instance to implement the functionalities in the previous table.

```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-root_instance.jpg
Use the ‘DeviceBatchedGemmMultiD_Xdl’ instance as a root.
```

The `DeviceBatchedGemmMultiD_Xdl` instance realizes the batched GEMM `BMM_ABE_I8` and `BMM_AB_I8_E_F32` kernels directly by using the proper input and output data precision types. Based on the two batched GEMM kernels, the GEMM kernels `Linear_ABDE_I8` and `Linear_AB_I8_DE_F32` can be implemented by expanding their input 2-D tensors to 3-D tensors. The 3-D output tensors produced by the root instance are then squeezed back to 2-D output tensors before being returned. For example, unsqueeze A (M, K) to A (1, M, K) before assigning it to the root instance, and squeeze E (1, M, N) to (M, N) after the calculation of the root instance returns. `Linear_ReLU_ABDE_I8` is implemented by adding a ReLU operation on the output of `Linear_ABDE_I8`.

### Developing the complete function

Because the inference of SQ quantized models relies on the PyTorch and Transformers libraries, and because `torch` uses a tensor type to represent matrices and vectors, the C++ data types in CK need to be replaced with the `torch::Tensor` type. The data types of the input and output matrices should be a `tensor` type.

In GEMM, the A and B inputs are two-dimensional matrices, while the required input matrices of the selected fundamental CK instance are three-dimensional matrices. Therefore, we must convert the input 2-D tensors to 3-D tensors by using `tensor`'s `unsqueeze()` method before passing these matrices to the instance. For batched GEMM in the preceding table, ignore this step.

```c++
// Function input and output
torch::Tensor linear_relu_abde_i8(
    torch::Tensor A_,
    torch::Tensor B_,
    torch::Tensor D_,
    float alpha,
    float beta)
{
  // Convert torch::Tensor A_ (M, K) to torch::Tensor A (1, M, K)
  auto A = A_.unsqueeze(0);

  // Convert torch::Tensor B_ (K, N) to torch::Tensor B (1, K, N)
  auto B = B_.unsqueeze(0);
  ...
```

As shown in the following code block, we obtain the M, N, and K values from the input tensor sizes. This size and stride information is used to reshape the input vector D and to allocate the storage space of tensor E. Stride reflects the exact size of continuous elements in memory, and the strides are passed as important parameters to the fundamental instance for GPU kernel use.
```c++ // Return the batch count from the size of dimension 0 int batch_count = A.size(0); // Return the M, N, K from the size of dimension 1 & 2 int M = A.size(1); int N = B.size(1); int K = A.size(2); // Initialize the stride size for A, B, D and E int stride_A = K; int stride_B = K; int stride_D0 = N; int stride_E = N; // Initialize the stride size for batched A, B, D and E long long int batch_stride_A = M * K; long long int batch_stride_B = K * N; long long int batch_stride_D0 = M * N; long long int batch_stride_E = M * N; // Convert the tensor of 2-D to 3-D auto D = D_.view({1,-1}).repeat({M, 1}); // Allocate memory for E auto E = torch::empty({batch_count, M, N}, torch::dtype(torch::kInt8).device(A.device())); ``` In the following code block, `ADataType`, `BDataType` and `D0DataType` are used to denote the data precision of the input tensors A, B and D, respectively. `EDataType` is used to denote the data precision of output tensor E. These parameters are specified to `I8` data format (8-bit integer data format) to meet the kernel's design requirements. `AccDataType` determines the data precision used to represent the multiply-add results of A and B elements. Generally, a larger range data type is applied to store the multiply-add results of A and B to avoid result overflow; `I32` is applied in this case. The `CShuffleDataType I32` data type indicates that the multiply-add results continue to be stored in LDS as an `I32` data format. All of this is implemented through the following code block. ```c++ // Data precision using ADataType = I8; using BDataType = I8; using AccDataType = I32; using CShuffleDataType = I32; using D0DataType = I8; using DsDataType = ck::Tuple; using EDataType = I8; ``` Following the convention of various linear algebra libraries, row-major and column-major orders are used to denote the ways of storing matrices in linear storage. The advantage of specifying matrix B as column major is that all the relevant matrix elements are stored continuously in GPU global memory when a row in A is multiplied by a column in B, which can help GPU achieve data consistency access to improve access performance. ```c++ // Specify tensor order using ALayout = RowMajor; using BLayout = ColumnMajor; using D0Layout = RowMajor; using DsLayout = ck::Tuple; using ELayout = RowMajor; ``` In CK, `PassThrough` is a struct denoting if an operation is applied to the tensor it binds to. To fuse the operations between E1, E2, and E introduced in section [Operation flow analysis](#operation-flow-analysis), we define a custom C++ struct, `ScaleScaleAddRelu`, and bind it to `CDEELementOp`. It determines the operations that will be applied to `CShuffle` (A×B results), tensor D, α, and β. ```c++ // No operations bound to the elements of A and B using AElementOp = PassThrough; using BElementOp = PassThrough; // Operations bound to the elements of C, D and E using CDEElementOp = ScaleScaleAddRelu; ``` In the binding struct, `operator()` performs an addition operation between `CShuffle` and matrix D, a ReLU operation on the addition results, and a rounding operation on the output elements. It then returns the results to E. 
In the binding struct, `operator()` performs an addition between `CShuffle` and matrix D, applies a ReLU to the sum, and performs a saturating, rounded conversion of the result before returning it to E.

```c++
struct ScaleScaleAddRelu {

  __host__ __device__ constexpr void
  operator()(I8& e, const I32& c, const I8& d) const
  {
    // Scale the AxB result with alpha
    const F32 c_scale = ck::type_convert<F32>(c) * alpha;

    // Scale D with beta
    const F32 d_scale = ck::type_convert<F32>(d) * beta;

    // Perform the addition operation
    F32 temp = c_scale + d_scale;

    // Perform the ReLU operation
    temp = temp > 0 ? temp : 0;

    // Saturate to the int8 upper bound
    temp = temp > 127 ? 127 : temp;

    // Convert and return to E
    e = ck::type_convert<I8>(temp);
  }

  F32 alpha;
  F32 beta;
};
```

The original input tensors need to be padded to meet the requirements of GPU tile-based parallelism.

```c++
static constexpr auto GemmDefault =
    ck::tensor_operation::device::GemmSpecialization::MNKPadding;
```

The template parameters of the target fundamental instance are initialized with the parameters above, together with a set of tunable parameters (shown here as a placeholder). For specific tuning methods, see [Tunable parameters](#tunable-parameters).

```c++
using DeviceOpInstance = ck::tensor_operation::device::DeviceBatchedGemmMultiD_Xdl<
    // Tensor layout
    ALayout, BLayout, DsLayout, ELayout,
    // Tensor data type
    ADataType, BDataType, AccDataType, CShuffleDataType, DsDataType, EDataType,
    // Tensor operation
    AElementOp, BElementOp, CDEElementOp,
    // Padding strategy
    GemmDefault,
    // Tunable parameters
    /* tunable parameters */>;
```

Return the address of the first element of each tensor:

```c++
  auto A_ref  = A.data_ptr();
  auto B_ref  = B.data_ptr();
  auto D0_ref = D.data_ptr();
  auto E_ref  = E.data_ptr();
```

The fundamental instance is then initialized and run with the actual arguments:

```c++
  auto device_op = DeviceOpInstance{};
  auto invoker   = device_op.MakeInvoker();

  auto argument = device_op.MakeArgument(
      A_ref, B_ref, {D0_ref}, E_ref,
      M, N, K, batch_count,
      stride_A, stride_B, {stride_D0}, stride_E,
      batch_stride_A, batch_stride_B, {batch_stride_D0}, batch_stride_E,
      AElementOp{}, BElementOp{}, CDEElementOp{alpha, beta});

  invoker.Run(argument, StreamConfig{nullptr, 0});
```

The output of the fundamental instance is a calculated batched matrix E (batch, M, N). If a plain GEMM result is required, it must be converted back to a 2-D matrix before returning.

```c++
  // Convert (1, M, N) to (M, N)
  return E.squeeze(0);
```

### Binding to Python

Since these functions are written in C++ with `torch::Tensor`, you can use `pybind11` to bind them and import them as Python modules. For this example, the binding code needed to expose the functions in the table is only a few lines.

```c++
#include <torch/extension.h>

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("linear_ab_i8_de_f32", &linear_ab_i8_de_f32);
  m.def("linear_relu_abde_i8", &linear_relu_abde_i8);
  m.def("linear_abde_i8", &linear_abde_i8);
  m.def("bmm_abe_i8", &bmm_abe_i8);
  m.def("bmm_ab_i8_e_f32", &bmm_ab_i8_e_f32);
}
```
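After the extension has been built and installed (the build steps follow), the bound kernels can be called from Python like any other module. The snippet below is a hypothetical usage sketch: it assumes the module name `torch_int.rocm` from the example `setup.py`, int8 inputs shaped as in the comments of the C++ source above, and a GPU visible to PyTorch.

```python
import torch
import torch_int.rocm as ck_kernels  # assumes the module name from the example setup.py

M, N, K = 128, 256, 512
# "cuda" maps to the ROCm GPU in PyTorch's HIP build
A = torch.randint(-128, 127, (M, K), dtype=torch.int8, device="cuda")  # int8 activations
B = torch.randint(-128, 127, (K, N), dtype=torch.int8, device="cuda")  # int8 weights (layout per the B operand)
D = torch.randint(-128, 127, (N,),   dtype=torch.int8, device="cuda")  # int8 bias row, broadcast to (M, N) inside
alpha, beta = 0.02, 1.0

E = ck_kernels.linear_relu_abde_i8(A, B, D, alpha, beta)  # int8 output of shape (M, N)
print(E.shape, E.dtype)
```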
Build the C++ extension by writing a `setup.py` script that uses `setuptools` to compile the C++ code. A reference implementation of the `setup.py` script follows.

```python
import os
from setuptools import setup, find_packages
from torch.utils import cpp_extension
from torch.utils.cpp_extension import BuildExtension

os.environ["CC"] = "hipcc"
os.environ["CXX"] = "hipcc"

sources = [
    'torch_int/kernels/linear.cpp',
    'torch_int/kernels/bmm.cpp',
    'torch_int/kernels/pybind.cpp',
]

include_dirs = ['torch_int/kernels/include']
extra_link_args = ['libutility.a']
extra_compile_args = ['-O3', '-DNDEBUG', '-std=c++17',
                      '--offload-arch=gfx942',
                      '-DCK_ENABLE_INT8', '-D__HIP_PLATFORM_AMD__=1']

setup(
    name='torch_int',
    ext_modules=[
        cpp_extension.CUDAExtension(
            name='torch_int.rocm',
            sources=sources,
            include_dirs=include_dirs,
            extra_link_args=extra_link_args,
            extra_compile_args=extra_compile_args),
    ],
    cmdclass={
        'build_ext': BuildExtension.with_options(use_ninja=False)
    },
    packages=find_packages(
        exclude=['notebook', 'scripts', 'tests']),
)
```

Run `python setup.py install` to build and install the extension. The output should look something like Figure 6:

```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-compilation.jpg
Compilation and installation of the INT8 kernels.
```

### INT8 model inference and performance

The implementation architecture for running SmoothQuant models on MI300X GPUs is illustrated in Figure 7, where (a) shows the components that make up a decoder layer of the target model, (b) shows the major implementation classes for those components, and (c) denotes the underlying GPU kernels implemented with the CK instance.

```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-inference_flow.jpg
The implementation architecture of running SmoothQuant models on AMD MI300X GPUs.
```

For the target [SQ quantized model](https://huggingface.co/mit-han-lab/opt-13b-smoothquant), each decoder layer contains three major components: attention calculation, layer normalization, and the linear transformations in fully connected layers. The corresponding implementation classes for these components are:

- `Int8OPTAttention`
- `W8A8B8O8LinearReLU`
- `W8A8BF32OF32Linear`

The underlying implementation logic of these classes harnesses the functions in the previous table. Note that in this example, the `LayerNormQ` module is implemented with the native torch module.

Testing environment: the hardware platform used for testing is equipped with AMD EPYC 9534 64-core processors (256 CPUs), 8 AMD Instinct MI300X GPUs, and 1.5 TB of memory. The testing was done in a publicly available Docker image from Docker Hub: [`rocm/pytorch:rocm6.1_ubuntu22.04_py3.10_pytorch_2.1.2`](https://hub.docker.com/layers/rocm/pytorch/rocm6.1_ubuntu22.04_py3.10_pytorch_2.1.2/images/sha256-f6ea7cee8aae299c7f6368187df7beed29928850c3929c81e6f24b34271d652b)

The tested models are the OPT-1.3B, 2.7B, 6.7B, and 13B FP16 models; the corresponding SmoothQuant INT8 OPT models were obtained from Hugging Face. Note that because the default values were used for the tunable parameters of the fundamental instance, the performance of the INT8 kernel is suboptimal.
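The latency and memory numbers discussed below are per-sample measurements. A minimal way to take such measurements with PyTorch is sketched here; it is an illustration rather than the exact harness used for the figures, and `model` and `input_ids` are placeholders for a loaded SmoothQuant OPT model and a tokenized sample.

```python
import time
import torch

def measure_per_sample(model, input_ids, warmup=3, iters=10):
    """Rough per-sample latency (seconds) and peak GPU memory (GiB) for one forward pass."""
    torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        for _ in range(warmup):          # warm-up iterations are not timed
            model(input_ids)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(input_ids)
        torch.cuda.synchronize()         # wait for all GPU work before stopping the clock
    latency = (time.perf_counter() - start) / iters
    peak_gib = torch.cuda.max_memory_allocated() / (1024 ** 3)
    return latency, peak_gib
```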
Figure 8 shows the performance comparisons between the original FP16 models and the SmoothQuant-quantized INT8 models on a single MI300X GPU. The GPU memory footprints of the SmoothQuant-quantized models are significantly reduced, and the per-sample inference latency is significantly lower for all SmoothQuant-quantized OPT models (illustrated in (b)). Notably, the performance of the CK instance-based INT8 kernel steadily improves as the model size increases.

```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-comparisons.jpg
Performance comparisons between the original FP16 and the SmoothQuant-quantized INT8 models on a single MI300X GPU.
```

For accuracy comparisons between the original FP16 and INT8 models, the evaluation uses the first 1,000 samples from the LAMBADA dataset's validation set. We employ the same Last Token Prediction Accuracy method introduced in [SmoothQuant Real-INT8 Inference for PyTorch](https://github.com/mit-han-lab/smoothquant/blob/main/examples/smoothquant_opt_real_int8_demo.ipynb) as the evaluation metric. The comparison results are shown in Table 2.

:::{table} The inference accuracy comparisons of SmoothQuant quantized models on Instinct MI300X.

| Models   | Hugging Face FP16 model accuracy | SmoothQuant quantized INT8 model accuracy |
|:---------|----------------------------------|-------------------------------------------|
| opt-1.3B | 0.72                             | 0.70                                      |
| opt-2.7B | 0.76                             | 0.75                                      |
| opt-6.7B | 0.80                             | 0.79                                      |
| opt-13B  | 0.79                             | 0.77                                      |
:::

## Conclusion

CK provides a rich set of template parameters for generating flexible accelerated computing kernels for different application scenarios. CK supports multiple AMD Instinct GPU instruction sets, operator fusion, and different data precisions, and its composability helps users quickly construct and verify operator performance. With CK, you can build more effective AI applications with higher flexibility and better performance on different AMD GPU platforms.

---

---
myst:
  html_meta:
    "description": "Learn more about common system-level debugging measures for ROCm."
    "keywords": "env, var, sys, PCIe, troubleshooting, admin, error"
---

# System debugging

## ROCm language and system-level debug, flags, and environment variables

Kernel options to avoid the Ethernet port getting renamed every time you change graphics cards: `net.ifnames=0 biosdevname=0`

## ROCr error code

* 2 Invalid Dimension
* 4 Invalid Group Memory
* 8 Invalid (or Null) Code
* 32 Invalid Format
* 64 Group is too large
* 128 Out of VGPRs
* 0x80000000 Debug Options

## Command to dump firmware version and get Linux kernel version

`sudo cat /sys/kernel/debug/dri/1/amdgpu_firmware_info`

`uname -a`

## Debug flags

Debug messages help when developing or debugging the base ROCm driver. You can enable printing from `libhsakmt.so` by setting the environment variable `HSAKMT_DEBUG_LEVEL`. Available debug levels are 3-7; the higher the level you set, the more messages will print.

* `export HSAKMT_DEBUG_LEVEL=3` : Only pr_err() prints.
* `export HSAKMT_DEBUG_LEVEL=4` : pr_err() and pr_warn() print.
* `export HSAKMT_DEBUG_LEVEL=5` : We currently do not implement "notice". Setting to 5 is the same as setting to 4.
* `export HSAKMT_DEBUG_LEVEL=6` : pr_err(), pr_warn(), and pr_info print.
* `export HSAKMT_DEBUG_LEVEL=7` : Everything including pr_debug prints.

## ROCr level environment variables for debug

`HSA_ENABLE_SDMA=0`

`HSA_ENABLE_INTERRUPT=0`

`HSA_SVM_GUARD_PAGES=0`

`HSA_DISABLE_CACHE=1`

## Turn off page retry on GFX9/Vega devices

`sudo -s`

`echo 1 > /sys/module/amdkfd/parameters/noretry`

## HIP environment variables 3.x

### OpenCL debug flags

`AMD_OCL_WAIT_COMMAND=1 (0 = OFF, 1 = On)`

## PCIe-debug

For information on how to debug and profile HIP applications, see {doc}`hip:how-to/debugging`

---
"keywords": "RDNA2, workstation, desktop, BIOS, installation, Radeon, pro, v620, w6000" --- :orphan: # AMD RDNA2 system optimization ## Workstation workloads Workstation workloads, much like those for HPC, have a unique set of requirements: a blend of both graphics and compute, certification, stability and others. The document covers specific software requirements and processes needed to use these GPUs for Single Root I/O Virtualization (SR-IOV) and machine learning tasks. The main purpose of this document is to help users utilize the RDNA™ 2 GPUs to their full potential. ```{list-table} :header-rows: 1 :stub-columns: 1 * - System Guide - Architecture reference - White papers * - [System settings](#system-settings) - [AMD RDNA 2 instruction set architecture](https://www.amd.com/system/files/TechDocs/rdna2-shader-instruction-set-architecture.pdf) - [RDNA 2 architecture](https://www.amd.com/system/files/documents/rdna2-explained-radeon-pro-W6000.pdf) ``` ## System settings This chapter reviews system settings that are required to configure the system for ROCm virtualization on RDNA2-based AMD Radeon™ PRO GPUs. Installing ROCm on Bare Metal follows the routine ROCm {doc}`installation procedure`. To enable ROCm virtualization on V620, one has to setup Single Root I/O Virtualization (SR-IOV) in the BIOS via setting found in the following ({ref}`bios-settings`). A tested configuration can be followed in ({ref}`os-settings`). :::{attention} SR-IOV is supported on V620 and unsupported on W6800. ::: (bios-settings)= ### System BIOS settings ```{list-table} Settings for the system BIOS in an ASrock platform. :header-rows: 1 :name: v620-bios * - Advanced / North Bridge Configuration - IOMMU - Enabled - Input-output Memory Management Unit * - Advanced / North Bridge Configuration - ACS Enable - Enabled - Access Control Service * - Advanced / PCIe/PCI/PnP Configuration - SR-IOV Support - Enabled - Single Root I/O Virtualization * - Advanced / ACPI settings - PCI AER Support - Enabled - Advanced Error Reporting ``` To set up the host, update SBIOS to version 1.2a. 
(os-settings)=

### Operating system settings

```{list-table} System Configuration Prerequisites
:header-rows: 1
:name: v620-prereq

* - Server
  - [SMC 4124](https://www.supermicro.com/en/Aplus/system/4U/4124/AS-4124GS-TNR.cfm) [AS -4124GS-TNR]
* - Host OS
  - Ubuntu 20.04.3 LTS
* - Host Kernel
  - 5.4.0-97-generic
* - CPU
  - AMD EPYC 7552 48-Core Processor
* - GPU
  - RDNA2 V620 (D603GLXE)
* - SBIOS
  - Version SMC_r_1.2a
* - VBIOS
  - 113-D603GLXE-077
* - Guest OS 1
  - Ubuntu 20.04.5 LTS
* - Guest OS 2
  - RHEL 9.0
* - GIM Driver
  - gim-dkms_1.0.0.1234577_all
* - VM CPU Cores
  - 32
* - VM RAM
  - 64 GB
```

Install the following Kernel-based Virtual Machine (KVM) hypervisor packages:

```shell
sudo apt-get -y install qemu-kvm qemu-utils bridge-utils virt-manager gir1.2-spiceclientgtk* gir1.2-spice-client-gtk* libvirt-daemon-system dnsmasq-base
sudo virsh net-start default  # enable the default virtual network
```

Enable the input-output memory management unit (IOMMU) in the GRUB settings by adding the following line to `/etc/default/grub`:

```none
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on" for AMD CPU
```

Update GRUB and reboot:

```shell
sudo update-grub
sudo reboot
```

Install the GPU-IOV Module (GIM, where IOV is I/O Virtualization) driver and follow the steps below.

```shell
sudo dpkg -i <gim_driver>
sudo reboot
# Load the host driver to create 1 VF
sudo modprobe gim vf_num=1
# Note: if the GIM driver loaded successfully, "gim info:(gim_init:213) *****Running GIM*****" appears in dmesg
lspci -d 1002:
```

The output should look something like:

```none
01:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Device 1478
02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Device 1479
03:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 73a1
03:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 73ae → VF
```

### Guest OS installation

First, assign the GPU virtual function (VF) to the VM using the following steps.

1. Shut down the VM.
2. Run `virt-manager`.
3. In the **Virtual Machine Manager** GUI, select the **VM** and click **Open**.

   ![Virtual Machine Manager](../../data/how-to/tuning-guides/tuning014.png "Virtual Machine Manager")

4. In the VM GUI, go to **Show Virtual Hardware Details > Add Hardware** to configure hardware.

   ![Show virtual hardware details](../../data/how-to/tuning-guides/tuning015.png "Virtual Machine Manager: show virtual hardware details")

5. Go to **Add Hardware > PCI Host Device > VF** and click **Finish**.

   ![VF Selection](../../data/how-to/tuning-guides/tuning016.png "VF Selection")

Then start the VM.

Finally, install ROCm on the virtual machine (VM). For detailed instructions, refer to the {doc}`Linux install guide`.

---

---
myst:
  html_meta:
    "description": "Start building for HPC and AI with the performance-first AMD ROCm software stack. Explore how-to guides and reference docs."
    "keywords": "Radeon, open, compute, platform, install, how, conceptual, reference, home, docs"
---

# AMD ROCm documentation

ROCm is an open-source software platform optimized to extract HPC and AI workload performance from AMD Instinct GPUs and AMD Radeon GPUs while maintaining compatibility with industry software frameworks. For more information, see [What is ROCm?](./what-is-rocm.rst)

ROCm supports multiple programming languages and programming interfaces such as {doc}`HIP (Heterogeneous-Compute Interface for Portability)`, OpenCL, and OpenMP, as explained in the [Programming guide](./how-to/programming_guide.rst).
If you're using AMD Radeon GPUs or Ryzen APUs in a workstation setting with a display connected, review {doc}`ROCm on Radeon and Ryzen documentation`.

ROCm documentation is organized into the following categories:

::::{grid} 1 2 2 2
:gutter: 3
:class-container: rocm-doc-grid

:::{grid-item-card} Install
:class-body: rocm-card-banner rocm-hue-2

* {doc}`ROCm on Linux `
* {doc}`HIP SDK on Windows `
* {doc}`ROCm on Radeon and Ryzen`
* {doc}`Deep learning frameworks `
* {doc}`Build from source `
:::

:::{grid-item-card} How to
:class-body: rocm-card-banner rocm-hue-12

* [Use ROCm for AI](./how-to/rocm-for-ai/index.rst)
* [AI tutorials](https://rocm.docs.amd.com/projects/ai-developer-hub/en/latest/)
* [Use ROCm for HPC](./how-to/rocm-for-hpc/index.rst)
* [System optimization](./how-to/system-optimization/index.rst)
* [AMD Instinct MI300X performance validation and tuning](./how-to/tuning-guides/mi300x/index.rst)
* [System debugging](./how-to/system-debugging.md)
* [Use advanced compiler features](./conceptual/compiler-topics.md)
* [Set the number of CUs](./how-to/setting-cus)
* [Troubleshoot BAR access limitation](./how-to/Bar-Memory.rst)
* [ROCm examples](https://github.com/amd/rocm-examples)
:::

:::{grid-item-card} Conceptual
:class-body: rocm-card-banner rocm-hue-8

* [GPU architecture overview](./conceptual/gpu-arch.md)
* [File structure (Linux FHS)](./conceptual/file-reorg.md)
* [GPU isolation techniques](./conceptual/gpu-isolation.md)
* [Using CMake](./conceptual/cmake-packages.rst)
* [Inception v3 with PyTorch](./conceptual/ai-pytorch-inception.md)
:::

:::{grid-item-card} Reference
:class-body: rocm-card-banner rocm-hue-6

* [ROCm libraries](./reference/api-libraries.md)
* [ROCm tools, compilers, and runtimes](./reference/rocm-tools.md)
* [GPU hardware specifications](./reference/gpu-arch-specs.rst)
* [Hardware atomics operation support](./reference/gpu-atomics-operation.rst)
* [Environment variables](./reference/env-variables.rst)
* [Data types and precision support](./reference/precision-support.rst)
* [Graph safe support](./reference/graph-safe-support.rst)
:::

::::

---

# ROCm libraries

::::{grid} 1 2 2 2
:gutter: 3
:class-container: rocm-doc-grid

:::{grid-item-card} Machine Learning and Computer Vision
:class-body: rocm-card-banner rocm-hue-3

(artificial-intelligence-apis)=

* {doc}`Composable Kernel `
* {doc}`MIGraphX `
* {doc}`MIOpen `
* {doc}`MIVisionX `
* {doc}`rocAL `
* {doc}`rocDecode `
* {doc}`rocPyDecode `
* {doc}`rocJPEG `
* {doc}`ROCm Performance Primitives (RPP) `
:::

:::{grid-item-card} Primitives
:class-body: rocm-card-banner rocm-hue-12

(cpp-primitives)=

* {doc}`hipCUB `
* {doc}`hipTensor `
* {doc}`rocPRIM `
* {doc}`rocThrust `
:::

:::{grid-item-card} Communication
:class-body: rocm-card-banner rocm-hue-7

(communication-libraries)=

* {doc}`RCCL `
* {doc}`rocSHMEM `
:::

:::{grid-item-card} Math
:class-body: rocm-card-banner rocm-hue-6

(math-apis)=

* [half](https://github.com/ROCm/half)
* {doc}`hipBLAS ` / {doc}`rocBLAS `
* {doc}`hipBLASLt `
* {doc}`hipFFT ` / {doc}`rocFFT `
* {doc}`hipfort `
* {doc}`hipRAND ` / {doc}`rocRAND `
* {doc}`hipSOLVER ` / {doc}`rocSOLVER `
* {doc}`hipSPARSE ` / {doc}`rocSPARSE `
* {doc}`hipSPARSELt `
* {doc}`rocALUTION `
* {doc}`rocWMMA `
* {doc}`Tensile `
:::

::::

---

# ROCm tools, compilers, and runtimes

::::{grid} 1 2 2 2
:gutter: 3
:class-container: rocm-doc-grid

:::{grid-item-card} System Management
:class-body: rocm-card-banner rocm-hue-1

(system-tools)=

* {doc}`AMD SMI `
* {doc}`ROCm Data Center Tool `
* {doc}`rocminfo `
* {doc}`ROCm SMI `
* {doc}`ROCm Validation Suite `
:::

:::{grid-item-card} Performance
:class-body: rocm-card-banner rocm-hue-6

(performance-tools)=

* {doc}`ROCm Bandwidth Test `
* {doc}`ROCm Compute Profiler `
* {doc}`ROCm Systems Profiler `
* {doc}`ROCProfiler `
* {doc}`ROCprofiler-SDK `
* {doc}`ROCTracer `
:::

:::{grid-item-card} Development
:class-body: rocm-card-banner rocm-hue-1

(development-tools)=

* {doc}`ROCm CMake `
* {doc}`HIPIFY `
* {doc}`ROCdbgapi `
* {doc}`ROCm Debugger (ROCgdb) `
* {doc}`ROCr Debug Agent `
:::

:::{grid-item-card} Compilers
:class-body: rocm-card-banner rocm-hue-8

(compilers)=

* {doc}`ROCm Compilers `
* {doc}`HIPCC `
* [FLANG](https://github.com/ROCm/flang/)
:::

:::{grid-item-card} Runtimes
:class-body: rocm-card-banner rocm-hue-12

(runtimes)=

* {doc}`AMD Compute Language Runtime (CLR) `
* {doc}`HIP `
* {doc}`ROCR-Runtime `
:::

::::

---

:orphan:

# ROCm release history

| Version | Release date |
| ------- | ------------ |
| [7.1.1](https://rocm.docs.amd.com/en/docs-7.1.1/) | November 26, 2025 |
| [7.1.0](https://rocm.docs.amd.com/en/docs-7.1.0/) | October 30, 2025 |
| [7.0.2](https://rocm.docs.amd.com/en/docs-7.0.2/) | October 10, 2025 |
| [7.0.1](https://rocm.docs.amd.com/en/docs-7.0.1/) | September 17, 2025 |
| [7.0.0](https://rocm.docs.amd.com/en/docs-7.0.0/) | September 16, 2025 |
| [6.4.3](https://rocm.docs.amd.com/en/docs-6.4.3/) | August 7, 2025 |
| [6.4.2](https://rocm.docs.amd.com/en/docs-6.4.2/) | July 21, 2025 |
| [6.4.1](https://rocm.docs.amd.com/en/docs-6.4.1/) | May 21, 2025 |
| [6.4.0](https://rocm.docs.amd.com/en/docs-6.4.0/) | April 11, 2025 |
| [6.3.3](https://rocm.docs.amd.com/en/docs-6.3.3/) | February 19, 2025 |
| [6.3.2](https://rocm.docs.amd.com/en/docs-6.3.2/) | January 28, 2025 |
| [6.3.1](https://rocm.docs.amd.com/en/docs-6.3.1/) | December 20, 2024 |
| [6.3.0](https://rocm.docs.amd.com/en/docs-6.3.0/) | December 3, 2024 |
| [6.2.4](https://rocm.docs.amd.com/en/docs-6.2.4/) | November 6, 2024 |
| [6.2.2](https://rocm.docs.amd.com/en/docs-6.2.2/) | September 27, 2024 |
| [6.2.1](https://rocm.docs.amd.com/en/docs-6.2.1/) | September 20, 2024 |
| [6.2.0](https://rocm.docs.amd.com/en/docs-6.2.0/) | August 2, 2024 |
| [6.1.5](https://rocm.docs.amd.com/en/docs-6.1.5/) | March 13, 2025 |
| [6.1.2](https://rocm.docs.amd.com/en/docs-6.1.2/) | June 4, 2024 |
| [6.1.1](https://rocm.docs.amd.com/en/docs-6.1.1/) | May 8, 2024 |
| [6.1.0](https://rocm.docs.amd.com/en/docs-6.1.0/) | Apr 16, 2024 |
| [6.0.2](https://rocm.docs.amd.com/en/docs-6.0.2/) | Jan 31, 2024 |
| [6.0.0](https://rocm.docs.amd.com/en/docs-6.0.0/) | Dec 15, 2023 |
| [5.7.1](https://rocm.docs.amd.com/en/docs-5.7.1/) | Oct 13, 2023 |
| [5.7.0](https://rocm.docs.amd.com/en/docs-5.7.0/) | Sep 15, 2023 |
| [5.6.1](https://rocm.docs.amd.com/en/docs-5.6.1/) | Aug 29, 2023 |
| [5.6.0](https://rocm.docs.amd.com/en/docs-5.6.0/) | Jun 28, 2023 |
| [5.5.1](https://rocm.docs.amd.com/en/docs-5.5.1/) | May 24, 2023 |
| [5.5.0](https://rocm.docs.amd.com/en/docs-5.5.0/) | May 1, 2023 |
| [5.4.3](https://rocm.docs.amd.com/en/docs-5.4.3/) | Feb 7, 2023 |
| [5.4.2](https://rocm.docs.amd.com/en/docs-5.4.2/) | Jan 13, 2023 |
| [5.4.1](https://rocm.docs.amd.com/en/docs-5.4.1/) | Dec 15, 2022 |
| [5.4.0](https://rocm.docs.amd.com/en/docs-5.4.0/) | Nov 30, 2022 |
| [5.3.3](https://rocm.docs.amd.com/en/docs-5.3.3/) | Nov 17, 2022 |
| [5.3.2](https://rocm.docs.amd.com/en/docs-5.3.2/) | Nov 9, 2022 |
| [5.3.0](https://rocm.docs.amd.com/en/docs-5.3.0/) | Oct 4, 2022 |
| [5.2.3](https://rocm.docs.amd.com/en/docs-5.2.3/) | Aug 18, 2022 |
| [5.2.1](https://rocm.docs.amd.com/en/docs-5.2.1/) | Jul 21, 2022 |
| [5.2.0](https://rocm.docs.amd.com/en/docs-5.2.0/) | Jun 28, 2022 |
| [5.1.3](https://rocm.docs.amd.com/en/docs-5.1.3/) | May 20, 2022 |
| [5.1.1](https://rocm.docs.amd.com/en/docs-5.1.1/) | Apr 8, 2022 |
| [5.1.0](https://rocm.docs.amd.com/en/docs-5.1.0/) | Mar 30, 2022 |
| [5.0.2](https://rocm.docs.amd.com/en/docs-5.0.2/) | Mar 4, 2022 |
| [5.0.1](https://rocm.docs.amd.com/en/docs-5.0.1/) | Feb 16, 2022 |
| [5.0.0](https://rocm.docs.amd.com/en/docs-5.0.0/) | Feb 9, 2022 |