Course Information
- Available
- *The delivery and distribution of the certificate are subject to the policies and arrangements of the course provider.
Course Overview
Hands-on implementation with the power of TensorFlow 2.0
Transfer learning involves reusing a pre-trained model on a new problem. It is currently very popular in deep learning because it lets you train deep neural networks with comparatively little data: the knowledge of an already trained machine learning model is applied to a different but related problem.
The general idea is to take knowledge a model has learned from a task with plenty of labeled training data and apply it to a new task where little data is available. Instead of starting the learning process from scratch, you start from patterns that were learned by solving a related task.
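The idea above can be sketched in a few lines of tf.keras. This is an illustrative sketch, not the course's exact code: the base network, input size, and the 5-class new task are assumptions.

```python
import tensorflow as tf

# Sketch of transfer learning: reuse a base network pre-trained on
# ImageNet (a data-rich task), freeze its learned patterns, and train
# only a small new head for a hypothetical 5-class problem.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,       # drop the original ImageNet classifier
    weights="imagenet",      # start from patterns learned on a related task
)
base.trainable = False       # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # new head for the new task
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Because only the small head is trainable, the model can reach useful accuracy from far less labeled data than training the whole network from scratch.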
In this course, you will learn how to apply transfer learning to a range of machine learning problems by reusing pre-trained models to train new ones. Hands-on examples will get you started and show you how, and why, transfer learning is used so extensively across deep learning domains.
You will implement practical use cases of transfer learning with CNNs and RNNs, such as image classification, text classification, sentiment analysis, and much more. You'll see how to train models and how a pre-trained model can be used to train similar, untrained models, taking the transfer learning process even further. This allows you to implement advanced use cases and understand why transfer learning is gaining momentum for solving real-world problems in deep learning.
By the end of this course, you will not only be able to build machine learning models but will also have mastered transfer learning with the tf.keras, TensorFlow Hub, and TensorFlow Lite tools.
About the Author
Margaret Maynard-Reid is a Google Developer Expert (GDE) for Machine Learning, a contributor to the open-source ML framework TensorFlow, and an author on the official TensorFlow blog. She writes tutorials and speaks at conferences about on-device ML, deep learning, computer vision, TensorFlow, and Android.
Margaret leads the Google Developer Group (GDG) Seattle and Seattle Data/Analytics/ML and is passionate about helping others get started with AI/ML. She has taught in the University of Washington Professional and Continuing Education program. For several years, she has been working with TensorFlow, and has contributed to the success of TensorFlow 2.0 by testing and organizing the Global Docs Sprint project.
Course Content
- 4 section(s)
- 13 lecture(s)
- Section 1 Image Classifier from Scratch with TensorFlow 2.0
- Section 2 Transfer Learning with tf.keras
- Section 3 Transfer Learning with TensorFlow Hub
- Section 4 TFLite Model Maker
What You’ll Learn
- Build your own image classification application using Convolutional Neural Networks and TensorFlow 2.0
- Improve any image classification system by leveraging the power of transfer learning on Convolutional Neural Networks, in only a few lines of code
- Discover how viewers feel about movies by building a sentiment analysis system on IMDb reviews, utilizing the power of Recurrent Neural Networks and the TensorFlow 2.0 high-level API
- Learn how to perform transfer learning on Recurrent Neural Networks and substantially improve any text-based system
- Learn how to use TensorFlow Hub and TensorFlow Lite to make transfer learning much easier
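As one illustration of the "few lines of code" point above, a common follow-up to the frozen-base recipe is fine-tuning: unfreeze just the top of the pre-trained base and keep training with a much lower learning rate. This sketch assumes a hypothetical 5-class image task and is not the course's exact code.

```python
import tensorflow as tf

# Fine-tuning sketch: unfreeze only the top of a pre-trained base so its
# weights are adjusted gently rather than overwritten.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = True
for layer in base.layers[:-20]:      # leave all but the top 20 layers frozen
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),   # assumed 5-class task
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),  # low LR protects pre-trained weights
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])
```

The low learning rate is the key design choice: a normal-sized learning rate would quickly destroy the patterns the base network brings from its original task.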
Reviews
- Aditi Hazra
Good overview.
- Eleanor Dare
One of the clearest courses on this subject. The instructor explains every line and really makes sense of it. Excellent; I would love to study a longer course by this teacher.
- Carlos Andrés Campo González
It includes relevant material you probably won't find packaged this conveniently anywhere else; however, there are areas for improvement. A couple more live examples of transfer learning using small and mid-sized data, across several categories available in TensorFlow Hub (vision, NLP, GANs), would be amazing! Thanks for this nonetheless.
- Diogo Santos
It could provide more examples of how to apply transfer learning to text problems using several state-of-the-art models, and of how, for example, to include other features in the neural network besides the text encodings or the picture pixels.