Udemy

AI Application Boost with RAPIDS GPU Acceleration

  • 1,551 students
  • Updated 8/2025
4.4
(115 ratings)

Course Details

Enrollment Date
Open year-round
Course Level
Learning Mode
Duration
6 hours 31 minutes
Language of Instruction
English
Instructors
Jones Granatyr, Gabriel Alves, AI Expert Academy
Rating
4.4
(115 ratings)

Course Overview


High-speed and high-performance GPU and CUDA computing! Build Data Science pipelines 50 times faster!

This course is independently developed and is not affiliated with, endorsed, or sponsored by NVIDIA Corporation. RAPIDS is an open-source project originally developed by NVIDIA.

Data science and machine learning are among the largest computational workloads in the world, where modest improvements in the accuracy of analytical models can translate into billions of dollars in bottom-line impact. Data scientists constantly train, evaluate, iterate, and optimize models to achieve highly accurate results and exceptional performance. With NVIDIA's powerful RAPIDS platform, work that used to take days can now be accomplished in a matter of minutes, making the construction and deployment of high-value models easier and more agile. In data science, additional computational power means faster and more effective insights. RAPIDS harnesses the power of NVIDIA CUDA to accelerate the entire model-training workflow, running it on graphics processing units (GPUs).

In this course, you will learn everything you need to take your machine learning applications to the next level! Check out some of the topics that will be covered below:

  • Utilizing the cuDF, cuPy, and cuML libraries instead of pandas, NumPy, and scikit-learn, ensuring that data is processed and machine learning algorithms are executed with high performance on the GPU.

  • Comparing the performance of classic Python libraries with RAPIDS. In some experiments conducted during the classes, we achieved speedups exceeding 900x; that is, with certain datasets and algorithms, RAPIDS can be 900 times faster!

  • Creating a complete, step-by-step machine learning project using RAPIDS, from data loading to predictions.

  • Using DASK for task parallelism across multiple GPUs or CPUs, integrated with RAPIDS for superior performance.
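Because cuDF deliberately mirrors the pandas API, the library swap described in the first bullet is often a one-line change. A minimal sketch of the idea, run here with pandas since a RAPIDS GPU may not be available (on a RAPIDS install, the import would become `import cudf as pd` and the rest would be unchanged):

```python
# Sketch of the pandas -> cuDF swap. On a RAPIDS-enabled GPU you would
# write `import cudf as pd` instead; the DataFrame code below stays the same
# because cuDF mirrors the pandas API.
import pandas as pd  # swap for: import cudf as pd

df = pd.DataFrame({"city": ["A", "B", "A", "B"],
                   "sales": [10, 20, 30, 40]})

# Typical DataFrame operations, identical in pandas and cuDF.
totals = df.groupby("city")["sales"].sum()
filtered = df[df["sales"] > 15]

print(totals.to_dict())  # {'A': 40, 'B': 60}
print(len(filtered))     # 3
```

The same pattern applies to cuPy (`import cupy as np`) for NumPy-style array code.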

Throughout the course, we will use the Python programming language and the online Google Colab environment. This way, you don't need a local GPU to follow the classes, as we will use the free hardware provided by Google.
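One practical note: before importing RAPIDS libraries in Colab, it is worth confirming that the runtime actually has a GPU attached. A hedged sketch using only the standard library (`has_gpu` is a hypothetical helper name, not part of the course material; it simply checks whether the NVIDIA driver tool is on the PATH):

```python
# Rough check for a GPU runtime: the `nvidia-smi` tool ships with the
# NVIDIA driver, so its presence on PATH suggests a GPU is available.
# `has_gpu` is a hypothetical helper, not part of the course code.
import shutil

def has_gpu() -> bool:
    """Return True if `nvidia-smi` is on PATH (a rough GPU availability check)."""
    return shutil.which("nvidia-smi") is not None

if has_gpu():
    print("GPU runtime detected: cuDF/cuML can be used.")
else:
    print("No GPU detected: fall back to pandas/scikit-learn.")
```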

Course Curriculum

  • 6 sections
  • 47 lessons
  • Section 1: Introduction
  • Section 2: cuDF
  • Section 3: cuML
  • Section 4: Complete project
  • Section 5: DASK
  • Section 6: Final remarks

Course Content

  • Understand the differences between processing data using CPU and GPU
  • Use cuDF as a replacement for pandas for GPU-accelerated processing
  • Implement code using cuDF to manipulate DataFrames
  • Use cuPy as a replacement for NumPy for GPU-accelerated processing
  • Use cuML as a replacement for scikit-learn for GPU-accelerated processing
  • Implement a complete machine learning project using cuDF and cuML
  • Compare the performance of classic Python libraries that run on the CPU with RAPIDS libraries that run on the GPU
  • Implement projects with DASK for parallel and distributed processing
  • Integrate DASK with cuDF and cuML for GPU performance
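As with cuDF and pandas, cuML mirrors the scikit-learn estimator API, so porting a model is usually just an import change. A small sketch using the CPU scikit-learn version; on a RAPIDS install the import would become `from cuml.cluster import KMeans` with the fit/predict code unchanged:

```python
# Sketch of the scikit-learn -> cuML swap: cuML follows the scikit-learn
# estimator API, so only the import line changes on a RAPIDS GPU:
#   from cuml.cluster import KMeans
from sklearn.cluster import KMeans  # CPU version used in this sketch
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated blobs of 50 points each.
X = np.vstack([rng.normal(0, 0.1, (50, 2)),
               rng.normal(5, 0.1, (50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(sorted(np.bincount(km.labels_)))  # [50, 50]: each blob is one cluster
```

The performance comparisons in the course follow the same pattern: time the CPU estimator, then the GPU estimator, on the same data.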

Reviews

  • J
    Jin Xu
    5.0

    IT IS GREAT CLASS

  • Y
    Yury Khaydukov
    2.5

    Mentoring tone annoys, I am feeling like a retarded student being explained by a looks-patient-but-almost-boiling professor. Lection CPU vs GPU I didn't find any comparison of CPU vs GPU performance

  • T
    Teemo Teemojin
    4.0

    no comments so far

  • B
    Bharadwaja Choudhury
    5.0

    Simple and Clear explanation. Thank you.

