AMD AI Software with Radeon GPUs

AMD's AI software is designed to work seamlessly with the company's Radeon GPUs, unlocking a world of possibilities for developers and users alike.

Radeon GPUs are optimized for AMD's AI software, which enables machine learning and deep learning workloads to run faster and more efficiently.

This combination has significant implications for applications such as computer vision, natural language processing, and predictive analytics: developers can create more complex and accurate models, leading to better results and more efficient processing.

AMD AI Software Features

The AMD Ryzen AI Software is designed to help developers get started on select laptops powered by AMD Ryzen AI, and it provides the resources needed to start developing on those machines.

Windows PC Experiences

AMD Ryzen AI is designed to support the Windows 11 AI ecosystem, making it a great choice for anyone looking to unlock new AI experiences on a Windows PC, stay ahead of modern business requirements, and rethink the workplace with industry-leading AI.

Support for Microsoft Windows Studio Effects is just one example of the advanced AI experiences available on Windows PCs powered by AMD Ryzen AI.

With AI-enabled experiences, your team can do it all on a Ryzen AI-powered PC, from intelligent workflows to extreme personalization. This means you can streamline your work and achieve more with less effort.

AMD Ryzen AI developer tools provide building blocks for AI-powered applications, making it easier to create innovative solutions. This is a game-changer for developers and businesses alike.

Radeon GPUs Boost CG Software

Radeon GPUs are ideal for large AI workloads that require parallel throughput.

They're perfect for tasks that need to process a lot of data at once, like 3D modeling and animation, because they can handle many calculations simultaneously, making them much faster than CPUs for this kind of work.

In fact, they're so good at it that they're a staple of professional graphics and animation software.

GPU

A GPU is ideal for large AI workloads that require parallel throughput, which makes it a great choice for tasks that need to process a lot of information quickly. AMD Ryzen AI developer tools provide building blocks for AI-powered applications that can take advantage of the GPU's capabilities, including tools for tasks like machine learning and natural language processing.

If you're looking to use a GPU for AI workloads, you'll want to make sure you have the right tools and software. AMD's ROCm 6.x libraries and GPU driver have already been supported on Red Hat Enterprise Linux (RHEL) 9.x for some time.

Here are some popular AI frameworks that have recently gained support for AMD GPUs:

  • Ray, which gained proper ROCm integration in version 2.30.0 (June 21, 2024).
  • Flash Attention, which received ROCm support in version 2.3.6 (July 25, 2024).
  • DeepSpeed, which had advertised ROCm integration but needed fixes to skip compiling incompatible optimizers, delivered in version 0.15.1 (September 5, 2024).
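If you're already working in a Python environment, you can quickly confirm which of these packages are present and at what version. This sketch uses only the standard library; the distribution names below are the usual PyPI names, which you should adjust if your environment differs:

```python
from importlib import metadata

# Minimum versions at which ROCm support landed, per the list above.
ROCM_MINIMUMS = {
    "ray": "2.30.0",
    "flash_attn": "2.3.6",
    "deepspeed": "0.15.1",
}

def installed_versions(packages=ROCM_MINIMUMS):
    """Return {package: version string, or None if not installed}."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None  # not installed in this environment
    return versions
```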

Portability Over Accelerators

In the daily routine of a data scientist, what matters is experimenting with the libraries that you like to get your job done as quickly as possible.

At equivalent performance levels, it shouldn't matter what kind of accelerators are behind the scenes running your AI/ML workloads.

The changes required to port the fine-tuning Llama 3.1 with Ray on OpenShift AI example from one accelerator to the other are minimal.

Assuming you are connected to an OpenShift cluster with AMD Instinct accelerators, you can follow the steps in the previous example and only change these three lines to configure your Ray cluster.
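The three lines themselves depend on your cluster setup and aren't reproduced here, but the spirit of the change can be illustrated: the worker's extended GPU resource request is renamed from NVIDIA's device-plugin resource to AMD's. A hypothetical helper (the resource names are the standard Kubernetes ones; everything else is illustrative):

```python
def port_worker_resources(resources, target="amd.com/gpu", source="nvidia.com/gpu"):
    """Return a copy of a worker resource request with the GPU
    resource key renamed, e.g. nvidia.com/gpu -> amd.com/gpu."""
    ported = dict(resources)
    if source in ported:
        ported[target] = ported.pop(source)
    return ported

# Example: a Ray worker previously requesting one NVIDIA GPU.
spec = {"cpu": 8, "memory": "32Gi", "nvidia.com/gpu": 1}
print(port_worker_resources(spec))
# {'cpu': 8, 'memory': '32Gi', 'amd.com/gpu': 1}
```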

The runtime metrics from the AMD Instinct accelerators aren’t yet integrated into the OpenShift monitoring stack.

To view the real-time usage during fine-tuning, you can run the ROCm CLI inside one of the worker pods by opening a terminal from the OpenShift web console.

Solution Guide

The AMD Ryzen AI Solution Guide is a comprehensive resource built around a single message: the future of Windows business laptops starts with AMD Ryzen AI.

AMD Ryzen AI is part of the AMD Ryzen family, which is known for its high-performance capabilities.

AMD Ryzen AI is a trademark of Advanced Micro Devices, Inc., and is used in conjunction with other AMD technologies like XDNA and Threadripper.

Certain AMD technologies may require third-party enablement or activation, so it's essential to confirm with the system manufacturer for specific features.

Windows is a registered trademark of Microsoft Corporation in the US and/or other countries; Radeon and RDNA are likewise trademarks of Advanced Micro Devices, Inc.

Getting Started

First, you'll need to choose your platform: AMD Adaptable SoC or Ryzen AI. For AMD Adaptable SoC, a pre-built package is provided to deploy ONNX models on embedded Linux.

To enable Vitis AI ONNX Runtime Execution Provider on Windows, you'll need to copy DLL files from the extracted archive to the correct directory: C:\Program Files\onnxruntime\bin.

You'll also need to leverage the scripts in the quicktest folder to test your installation. This will ensure everything is set up correctly before moving forward.

Requirements

To get started with AMD Adaptable SoC development, you'll need to meet certain requirements. Among the AMD targets supported by the Vitis AI ONNX Runtime Execution Provider, the Ryzen AI family requires an AMD64 architecture and covers the AMD Ryzen 7040U and 7040HS targets.

Installation

To get started with your AMD Adaptable SoC or Ryzen AI installation, you'll need to follow some specific steps.

First, make sure you have the necessary pre-requisites installed on your Windows machine. For Ryzen AI, this includes Visual Studio 2019, cmake version 3.26 or higher, Python version 3.9 or higher, and the AMD IPU driver version 10.1109.8.100.

To install the Vitis AI ONNX Runtime Engine, download the Ryzen AI Software Package and unzip it. Then, enter the voe-4.0-win_amd64 ONNX runtime folder.

For AMD Adaptable SoC targets, a pre-built package is provided to deploy ONNX models on embedded Linux. However, if you're using Microsoft Windows, you'll need to copy the DLL files from the voe-0.1.0-cp39-cp39-win_amd64 subdirectory of the extracted archive to C:\Program Files\onnxruntime\bin.
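That copy step is easy to script. Here's a minimal sketch using only the standard library; the helper function is illustrative, and the real source and destination paths are the ones named above:

```python
import shutil
from pathlib import Path

def copy_dlls(src_dir, dst_dir):
    """Copy every .dll from src_dir into dst_dir; return copied names."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for dll in src.glob("*.dll"):
        shutil.copy2(dll, dst / dll.name)  # preserves file metadata
        copied.append(dll.name)
    return copied

# e.g. copy_dlls(r"voe-0.1.0-cp39-cp39-win_amd64",
#                r"C:\Program Files\onnxruntime\bin")
```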

You'll also need to set the XLNX_VART_FIRMWARE environment variable, which is crucial for loading the IPU with the required executable file. This involves executing a command from the Python prompt, replacing [path_to_xclbin] with the target path containing the xclbin.
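Setting the variable from Python can look like this (the xclbin path is a placeholder you must replace, as above; the helper name is illustrative):

```python
import os

def set_ipu_firmware(xclbin_path):
    """Point the Vitis AI runtime at the IPU executable (xclbin)."""
    os.environ["XLNX_VART_FIRMWARE"] = xclbin_path
    return os.environ["XLNX_VART_FIRMWARE"]

# set_ipu_firmware(r"[path_to_xclbin]\your.xclbin")
```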

To test your installation, leverage the scripts in the quicktest folder. This will help you verify that everything is working correctly.

Here are the pre-requisites for Ryzen AI installation:

  • Visual Studio 2019
  • cmake (version >= 3.26)
  • Python (version >= 3.9; Python 3.9.13 64-bit recommended)
  • AMD IPU driver version 10.1109.8.100 installed
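A quick sanity check for the Python and cmake prerequisites can be scripted; Visual Studio and the IPU driver version are best verified manually, and the function name below is illustrative:

```python
import shutil
import sys

def check_prereqs(min_python=(3, 9)):
    """Report whether the Python and cmake prerequisites look satisfied."""
    return {
        # Compare only (major, minor) against the documented minimum.
        "python_ok": sys.version_info[:2] >= min_python,
        # True if a cmake executable is on PATH (version not checked here).
        "cmake_found": shutil.which("cmake") is not None,
    }
```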

Development and Build

To get started with AMD Ryzen AI software development, head to the AMD Ryzen AI Software for Developers page, which collects the resources available for select laptops powered by AMD Ryzen AI and is a great place to begin.

To build the Ryzen AI Vitis AI ONNX Runtime Execution Provider from source, you'll need to follow the Build Instructions, which are available for reference.

Development

Development is where the magic happens, and AMD's Ryzen AI and Adaptable SoC targets make it easier than ever. The Vitis AI Execution Provider can ingest quantized ONNX models with INT8 datatypes.

To get started with quantization, you'll need to use either the Vitis AI Quantizer or Olive for Ryzen AI models. For AMD Adaptable SoCs, the Vitis AI Quantizer is the only option.

The Vitis AI Quantizer is a powerful tool that supports quantization of PyTorch and TensorFlow models, with dockers available for PyTorch and for TensorFlow 2.x and 1.x. This means you can easily convert your models to the required INT8 format.

An ONNX Quantizer Python wheel is also available to parse and quantize ONNX models, making it easy to integrate with the Ryzen AI Software Package. This end-to-end workflow is a huge time-saver and makes development much more efficient.
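To make the INT8 conversion concrete, here is a toy, framework-free sketch of symmetric affine quantization, the general scheme such quantizers apply to FP32 weights. It illustrates the idea only; real quantizers calibrate per-tensor or per-channel, and this is not the Vitis AI algorithm itself:

```python
def quantize_int8(values):
    """Affine-quantize floats to int8 using a single symmetric scale."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0           # map the largest magnitude to 127
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 values and the scale."""
    return [x * scale for x in q]

weights = [0.4, -1.0, 0.25]
q, scale = quantize_int8(weights)
# q == [51, -127, 32]; dequantize(q, scale) is close to the original weights
```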

In the future, the Vitis AI ONNX Runtime Execution Provider will even support on-the-fly quantization, allowing you to deploy FP32 ONNX models directly. This is a game-changer for developers who want to get their projects up and running quickly.

Runtime Options

The Vitis AI ONNX Runtime is integrated with a compiler that compiles the model graph and weights into a micro-coded executable. This executable is deployed on the target accelerator, such as the Ryzen AI IPU or Vitis AI DPU.

The model is compiled when the ONNX Runtime session is started, and compilation must complete prior to the first inference pass. This can take a few minutes to complete.

Several runtime variables configure the inference session; they are not optional and must be set for the session to work correctly.

The config file variable is required and must be set to point to the location of the configuration file, which is contained in the voe-[version]-win_amd64.zip.

Here are the runtime variables that can be set:

  • config_file – points to the configuration file described above.
  • cacheDir – the root directory of the compiled-model cache.
  • cacheKey – the per-model subdirectory within the cache.

The final cache directory is {cacheDir}/{cacheKey}.
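As a sketch of how these variables might be assembled in code (the helper and file names are illustrative; check your release's documentation for the authoritative option names):

```python
import os

def vitis_ai_session_options(config_file, cache_dir, cache_key):
    """Build the Vitis AI EP provider options and the resulting
    cache path, which lands in {cacheDir}/{cacheKey}."""
    provider_options = {
        "config_file": config_file,  # required: shipped in voe-[version]-win_amd64.zip
        "cacheDir": cache_dir,
        "cacheKey": cache_key,
    }
    final_cache = os.path.join(cache_dir, cache_key)
    return provider_options, final_cache

# Usage with onnxruntime (not executed here; names are placeholders):
# import onnxruntime as ort
# opts, cache = vitis_ai_session_options("config.json", "./cache", "model_v1")
# session = ort.InferenceSession("model.onnx",
#                                providers=["VitisAIExecutionProvider"],
#                                provider_options=[opts])
```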
