- Lecture 1.1 - What is ML (Machine Learning)
- Lecture 1.2 - What is DL (Deep Learning)
- Lecture 1.3 - How to Apply Deep Learning
- Lecture 2.1 - How to Train a Model
- Lecture 2.2 - What is a Model
- Lecture 2.3 - What Makes a Good Function
- Lecture 2.4 - How to Pick the Best Function
- Lecture 2.5 - Backpropagation: Efficient Computation over Many Parameters
- TA Recitation - Optimization
- Lecture 3.1 - Word Representations
- Lecture 3.2 - Language Modeling
- Lecture 3.3 - Recurrent Neural Networks in Detail
- Lecture 3.4 - RNN Applications
- TA Recitation - Practical Tips
- Lecture 4.1 - Attention Mechanism
- Lecture 4.2 - Attention Applications
- Assignment 1 Tutorial
- Lecture 5.1 - Word Representation Review
- Lecture 5.2 - Word2Vec
- Lecture 5.3 - Word2Vec Training
- Lecture 5.4 - Negative Sampling
- Lecture 5.5 - Word2Vec Variants
- Lecture 5.6 - GloVe
- Lecture 5.7 - Word Vector Evaluation
- Lecture 5.8 - Contextualized Word Embeddings
- Lecture 5.9 - ELMo: Origin of the Sesame Street Family
- Lecture 6.1 - Basic Attention (Review)
- Lecture 6.2 - Self-Attention: A New Attention Mechanism
- Lecture 6.3 - Multi-Head Attention
- Lecture 6.4 - Transformer
- Lecture 6.5 - BERT: Attack of the Sesame Street Titan
- TA Recitation - More on Embeddings
- Lecture 7.1 - Transformer-XL: Handling Very Long Inputs
- Lecture 7.2 - XLNet: Combining the Strengths of AR and AE Models
- Lecture 7.3 - RoBERTa, SpanBERT, XLM: Simple and Effective Improvements
- Lecture 7.4 - ALBERT: Shrinking BERT While Keeping It Effective
- TA Recitation - More on Transformers
- Lecture 8.1 - Deep Reinforcement Learning Introduction
- Lecture 8.2 - Markov Decision Process
- Lecture 8.3 - Reinforcement Learning
- Lecture 8.4 - Value-Based RL Approach
- Lecture 8.5 - Advanced DQN
- Lecture 9.1 - Policy Gradient
- Lecture 9.2 - Actor-Critic
- Lecture 10.1 - Natural Language Generation
- Lecture 10.2 - Decoding Algorithms
- Lecture 10.3 - NLG Evaluation
- Lecture 10.4 - RL for NLG (20-05-12)
- TA Recitation - RL for Dialogues (20-05-12)
- GAN (Quick Review)
- GAN Lecture 4 (2018) - Basic Theory
- GAN Lecture 6 (2018) - WGAN, EBGAN
- Lecture 11.1 - Unsupervised Learning Introduction
- Lecture 11.2 - Autoencoder & Variational Autoencoder
- Lecture 11.3 - Distant Supervision & Multi-Task Learning
- Lecture 12.1 - Conversational AI Introduction
- Lecture 12.2 - Task-Oriented Dialogues
- Lecture 12.3 - Chit-Chat Social Bots
- Lecture 13.1 - Robustness of Dialogue Systems (20-06-16)
- Lecture 13.2 - Scalability of Dialogue Systems
- Final Project - Rules & Grading
- Career and Study Experience Sharing
Lecture 0 2019/02/19 Course Logistics [slides]
Registration: [Google Form]
Lecture 1 2019/02/26 Introduction [slides] (video)
Guest Lecture (R103) [PyTorch Tutorial]
Lecture 2 2019/03/05 Neural Network Basics [slides] (video)
Suggested Readings:
[Linear Algebra]
[Linear Algebra Slides]
[Linear Algebra Quick Review]
A1 2019/03/05 A1: Dialogue Response Selection [A1 pages]
Lecture 3 2019/03/12 Backpropagation [slides] (video)
Word Representation [slides] (video)
Suggested Readings:
[Learning Representations]
[Vector Space Models of Semantics]
[RNNLM: Recurrent Neural Network Language Model]
[Extensions of RNNLM]
[Optimization]
Lecture 4 2019/03/19 Recurrent Neural Network [slides] (video)
Basic Attention [slides] (video)
Suggested Readings:
[RNN for Language Understanding]
[RNN for Joint Language Understanding]
[Sequence-to-Sequence Learning]
[Neural Conversational Model]
[Neural Machine Translation with Attention]
[Summarization with Attention]
[Normalization]
A2 2019/03/19 A2: Contextual Embeddings [A2 pages]
Lecture 5 2019/03/26 Word Embeddings [slides] (video)
Contextual Embeddings - ELMo [slides] (video)
Suggested Readings:
[Estimation of Word Representations in Vector Space]
[GloVe: Global Vectors for Word Representation]
[Sequence Tagging with BiLM]
[Learned in Translation: Contextualized Word Vectors]
[ELMo: Embeddings from Language Models]
[More Embeddings]
2019/04/02 Spring Break A1 Due
Lecture 6 2019/04/09 Transformer [slides] (video)
Contextual Embeddings - BERT [slides] (video)
Gating Mechanism [slides] (video)
Suggested Readings:
[Contextual Word Representations Introduction]
[Attention is all you need]
[BERT: Pre-training of Bidirectional Transformers]
[GPT: Improving Understanding by Unsupervised Learning]
[Long Short-Term Memory]
[Gated Recurrent Unit]
[More Transformer]
Lecture 7 2019/04/16 Reinforcement Learning Intro [slides] (video)
Basic Q-Learning [slides] (video)
Suggested Readings:
[Reinforcement Learning Intro]
[Stephane Ross' thesis]
[Playing Atari with Deep Reinforcement Learning]
[Deep Reinforcement Learning with Double Q-learning]
[Dueling Network Architectures for Deep Reinforcement Learning]
A3 2019/04/16 A3: RL for Game Playing [A3 pages]
Lecture 8 2019/04/23 Policy Gradient [slides] (video)
Actor-Critic (video)
More about RL [slides] (video)
Suggested Readings:
[Asynchronous Methods for Deep Reinforcement Learning]
[Deterministic Policy Gradient Algorithms]
[Continuous Control with Deep Reinforcement Learning]
A2 Due
Lecture 9 2019/04/30 Generative Adversarial Networks [slides] (video)
(Lectured by Prof. Hung-Yi Lee)
Lecture 10 2019/05/07 Convolutional Neural Networks [slides]
A4 2019/05/07 A4: Drawing [A4 pages]
2019/05/14 Break A3 Due
Lecture 11 2019/05/21 Unsupervised Learning [slides]
NLP Examples [slides]
Project Plan [slides]
Special 2019/05/28 Company Workshop Registration: [Google Form]
2019/06/04 Break A4 Due
Lecture 12 2019/06/11 Project Progress Presentation
Course and Career Discussion
Special 2019/06/18 Company Workshop Registration: [Google Form]
Lecture 13 2019/06/25 Final Presentation