Course Information
Course Overview
Master Deep Learning by building a PyTorch-like framework with NumPy: Autograd Engine, MLP, CNN & RNN.
Welcome to Python: Write Your Own Deep Learning Framework From Scratch.
This course teaches you how to build a simple, PyTorch-like deep learning framework from scratch, covering the core mechanics of automatic differentiation and the abstractions behind neural networks. I will take you through building a modular, working system step by step, using only Python and NumPy.
The first part of the course covers everything you need to know (computation graphs, backpropagation logic, gradient checking, etc.) to build a functional autograd engine. We start with scalar-valued variables and move on to more complex logic, such as handling variables that are reused as inputs to multiple operations and implementing advanced operators. You will learn how to automate the chain rule and verify your engine’s accuracy.
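To give a flavor of that first part, here is a minimal, illustrative sketch of a scalar autograd variable; the names (`Variable`, `backward`) and the design are assumptions for illustration, not necessarily the course's exact API. Note how gradients accumulate with `+=`, which is what makes reused inputs work, and how a finite-difference check verifies the result:

```python
class Variable:
    """A scalar that records how it was computed, so gradients
    can flow backward through the chain rule."""
    def __init__(self, data, _parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = _parents
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Variable) else Variable(other)
        out = Variable(self.data + other.data, (self, other))
        def _backward():
            # Accumulate, so a variable used in several places
            # receives every gradient contribution.
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Variable) else Variable(other)
        out = Variable(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# Gradient check: compare autograd against a finite-difference estimate.
x = Variable(3.0)
y = x * x + x          # dy/dx = 2x + 1 = 7
y.backward()
eps = 1e-6
f = lambda t: t ** 2 + t
numeric = (f(3.0 + eps) - f(3.0 - eps)) / (2 * eps)
print(x.grad, numeric)  # both ≈ 7.0
```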
The second part of the course teaches you how to transition from scalars to tensors. You will learn how to implement broadcasting, matrix multiplication, and shape manipulation. We will then restructure our code into a modular framework called NanoTorch. By the end of this part, you will implement essential framework components like Datasets, DataLoaders, and Optimizers to train models on the real-world MNIST dataset.
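As an illustration of the components that part covers, here is a rough sketch of a Dataset, DataLoader, and SGD optimizer in plain NumPy; every class name and interface below is an assumption for illustration and may differ from NanoTorch's actual design:

```python
import numpy as np

class Dataset:
    """Wraps arrays of inputs and labels with index-based access."""
    def __init__(self, inputs, labels):
        self.inputs, self.labels = inputs, labels
    def __len__(self):
        return len(self.inputs)
    def __getitem__(self, i):
        return self.inputs[i], self.labels[i]

class DataLoader:
    """Yields shuffled mini-batches from a Dataset."""
    def __init__(self, dataset, batch_size=32, shuffle=True):
        self.dataset, self.batch_size, self.shuffle = dataset, batch_size, shuffle
    def __iter__(self):
        idx = np.arange(len(self.dataset))
        if self.shuffle:
            np.random.shuffle(idx)
        for start in range(0, len(idx), self.batch_size):
            batch = idx[start:start + self.batch_size]
            xs, ys = zip(*(self.dataset[i] for i in batch))
            yield np.stack(xs), np.stack(ys)

class SGD:
    """Vanilla stochastic gradient descent over parameters that
    expose .data and .grad arrays."""
    def __init__(self, params, lr=0.01):
        self.params, self.lr = params, lr
    def step(self):
        for p in self.params:
            p.data -= self.lr * p.grad
    def zero_grad(self):
        for p in self.params:
            p.grad = np.zeros_like(p.data)

# Usage: iterate over mini-batches, e.g. of MNIST-sized inputs.
data = Dataset(np.random.randn(100, 784), np.random.randint(0, 10, 100))
for x_batch, y_batch in DataLoader(data, batch_size=32):
    pass  # forward pass, loss, backward(), optimizer.step() would go here
```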
The final part of the course focuses on implementing core neural network architectures. We will take a deep dive into Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). You will see how to implement the im2col algorithm for efficient convolution and how to handle sequential data for time-series tasks. Ultimately, we will write fully functional CNN and RNN architectures from the ground up, giving you an in-depth understanding of these powerful models.
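The key idea behind im2col is to unfold each receptive-field patch of the input into a row of a matrix, so the whole convolution collapses into a single matrix multiply. Here is a simplified sketch (stride 1, no padding; the function name and layout are illustrative assumptions, not the course's exact implementation):

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold a (C, H, W) input into a matrix whose rows are flattened
    kh x kw patches, so convolution reduces to one matmul.
    Simplified: stride 1, no padding."""
    c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, c * kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i:i + kh, j:j + kw]
            cols[i * out_w + j] = patch.ravel()
    return cols, out_h, out_w

# Convolution as a matmul, with filters of shape (F, C, kh, kw).
x = np.random.randn(3, 8, 8)
filters = np.random.randn(4, 3, 3, 3)
cols, out_h, out_w = im2col(x, 3, 3)
weights = filters.reshape(4, -1)            # (F, C*kh*kw)
out = (cols @ weights.T).T.reshape(4, out_h, out_w)
print(out.shape)  # (4, 6, 6)
```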
In this course you will learn:
- How to write a deep learning framework using pure Python and NumPy.
- How to build a functional autograd engine from scratch.
- How to implement core classes like Variable, Function, and Module.
- How to build a tensor engine that supports broadcasting and matrix operations.
- How to implement activation functions like ReLU, Sigmoid, and Softmax (see the sketch after this list).
- How to build a data pipeline with Dataset and DataLoader classes for mini-batch training.
- How to implement optimizers like Stochastic Gradient Descent (SGD).
- How to train and evaluate models on the MNIST dataset.
- How the im2col algorithm enables efficient convolutions.
- How to implement Convolutional Neural Networks (CNNs) from the ground up.
- How to implement Recurrent Neural Networks (RNNs) from the ground up.
- How to add Sequential model support for Recurrent Neural Networks (RNNs).
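For the activation functions mentioned above, here is a sketch of what ReLU, Sigmoid, and a numerically stable Softmax can look like in plain NumPy; these free functions are illustrative only, not the course's actual implementations:

```python
import numpy as np

def relu(x):
    # Zeroes out negatives; the gradient is 1 where x > 0, else 0.
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Subtract the row max before exponentiating for numerical stability.
    shifted = x - x.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
print(softmax(logits).sum())  # each row sums to 1.0
```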
By the end of the course, you will be able to develop your own deep learning framework and understand the low-level mechanics of deep learning architectures.
Course Content
- 12 sections
- 90 lectures
- Section 1: Introduction
- Section 2: Setup and Installation
- Section 3: Building a Scalar-Valued Autograd Engine From Scratch: The Core Architecture
- Section 4: Building a Scalar-Valued Autograd Engine From Scratch: Advanced Logic & Operators
- Section 5: Building a Full-Featured Autograd Engine From Scratch: From Scalars to Tensors
- Section 6: Neural Network Implementation: Building Modules and Optimizers
- Section 7: Building Our Own Framework: NanoTorch
- Section 8: NanoTorch in Action: Data Pipelines and MNIST Training
- Section 9: NanoTorch in Action: Building Multi-Layer Perceptrons (MLP)
- Section 10: NanoTorch in Action: Building Convolutional Neural Networks (CNN)
- Section 11: NanoTorch in Action: Building Recurrent Neural Networks (RNN)
- Section 12: Conclusion