Grade "A+" Accredited by NAAC with a CGPA of 3.46
Grade "A+" Accredited by NAAC with a CGPA of 3.46

Deep Learning

Course ID: BHCS 18B
Level: Undergraduate
Program: B.Sc. CS (Hons.)
Semester: Sixth
Credits: 6.0
Paper Type: DSE - 4
Method: Lecture & Practical

Unique Paper Code: Update Awaited

The objective of this course is to introduce students to deep learning algorithms and their applications for solving real-world problems.

Learning Outcomes:

At the end of the course, students should be able to:

  • Describe feed-forward and deep networks.
  • Design single and multi-layer feed-forward deep networks and tune various hyper-parameters.
  • Implement deep neural networks to solve a problem.
  • Analyse the performance of deep networks.

Course Contents

Unit 1

Introduction: Historical context and motivation for deep learning; basic supervised classification task; optimizing a logistic classifier using gradient descent, stochastic gradient descent, momentum, and adaptive sub-gradient methods.
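
As a concrete illustration of stochastic gradient descent with a momentum term applied to a logistic classifier, the following is a minimal NumPy sketch; the toy data, learning rate, and momentum coefficient are illustrative assumptions, not part of the syllabus.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_momentum(X, y, lr=0.1, beta=0.9, epochs=50):
    """Train a logistic classifier with per-example SGD plus a momentum term."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)                               # velocity (momentum) term
    for _ in range(epochs):
        for i in rng.permutation(len(y)):              # one example at a time (SGD)
            grad = (sigmoid(X[i] @ w) - y[i]) * X[i]   # gradient of the log-loss
            v = beta * v - lr * grad                   # momentum update
            w = w + v
    return w

# Toy data: two Gaussian blobs, with a bias column appended.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
y = np.array([0] * 50 + [1] * 50)
w = sgd_momentum(X, y)
print("training accuracy:", np.mean((sigmoid(X @ w) > 0.5) == y))
```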

Unit 2

Neural Networks: Feedforward neural networks, deep networks, regularizing a deep network, model exploration, and hyperparameter tuning.
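
A minimal sketch of a regularized feed-forward network and a crude hyperparameter sweep, assuming TensorFlow/Keras is available; the 20-feature synthetic data, layer widths, dropout rate, and L2 strength are illustrative choices.

```python
import numpy as np
import tensorflow as tf

def build_mlp(hidden_units=64, dropout_rate=0.3, l2=1e-4, lr=1e-3):
    """Two-hidden-layer feed-forward network with L2 and dropout regularization."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(20,)),              # 20 input features (illustrative)
        tf.keras.layers.Dense(hidden_units, activation="relu",
                              kernel_regularizer=tf.keras.regularizers.l2(l2)),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(hidden_units, activation="relu",
                              kernel_regularizer=tf.keras.regularizers.l2(l2)),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # 2-class output
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Synthetic 2-class data, then a sweep over one hyperparameter (hidden width).
x = np.random.rand(500, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")
for units in (32, 64, 128):
    model = build_mlp(hidden_units=units)
    history = model.fit(x, y, validation_split=0.2, epochs=5, verbose=0)
    print(units, "val accuracy:", history.history["val_accuracy"][-1])
```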

Unit 3

Convolutional Neural Networks: Introduction to convolutional neural networks; stacking, striding, and pooling; applications such as image and text classification.
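
A small convolutional network sketch that stacks convolution layers and uses both pooling and a strided convolution, assuming TensorFlow/Keras; the 32x32 RGB input shape, filter counts, and 10-class output are illustrative.

```python
import tensorflow as tf

# Small CNN for 10-class image classification on 32x32 RGB inputs (shapes are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, kernel_size=3, strides=1, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),          # pooling halves the spatial resolution
    tf.keras.layers.Conv2D(64, kernel_size=3, strides=2, padding="same", activation="relu"),  # strided conv
    tf.keras.layers.Conv2D(64, kernel_size=3, padding="same", activation="relu"),             # stacked conv
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```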

Unit 4

Sequence Modeling: Recurrent Nets: Unfolding computational graphs, recurrent neural networks (RNNs), bidirectional RNNs, encoder-decoder sequence-to-sequence architectures, deep recurrent networks, LSTM networks.
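
A sketch of a deep (stacked) recurrent classifier with a bidirectional LSTM layer, assuming TensorFlow/Keras; the vocabulary size, sequence length, layer widths, and 2-class output are illustrative assumptions.

```python
import tensorflow as tf

# Bidirectional, stacked LSTM classifier for sequences of integer token ids.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50,)),                       # sequences of 50 token ids
    tf.keras.layers.Embedding(input_dim=5000, output_dim=64),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True)),     # bidirectional layer, outputs a sequence
    tf.keras.layers.LSTM(32),                                  # second recurrent layer (deep RNN)
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```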

Unit 5

Autoencoders: Undercomplete autoencoders, regularized autoencoders, sparse autoencoders, denoising autoencoders, representational power, layer size, and depth of autoencoders, stochastic encoders and decoders.
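
A minimal undercomplete denoising autoencoder sketch, assuming TensorFlow/Keras; the 784-dimensional input (e.g. a flattened 28x28 image), the bottleneck width, the noise level, and the synthetic data are illustrative.

```python
import numpy as np
import tensorflow as tf

# Undercomplete autoencoder: 784 -> 64 -> 784.
inputs = tf.keras.layers.Input(shape=(784,))
encoded = tf.keras.layers.Dense(64, activation="relu")(inputs)       # bottleneck
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Denoising variant: reconstruct the clean inputs from corrupted copies (synthetic data here).
x = np.random.rand(256, 784).astype("float32")
x_noisy = np.clip(x + 0.2 * np.random.randn(256, 784), 0.0, 1.0).astype("float32")
autoencoder.fit(x_noisy, x, epochs=2, batch_size=32, verbose=0)
```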

Unit 6

Structuring Machine Learning Projects: Orthogonalization, evaluation metrics, train/dev/test distributions, size of the dev and test sets, cleaning up incorrectly labeled data, bias and variance with mismatched data distributions, transfer learning, multi-task learning.
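
A simple sketch of carving train/dev/test sets from a single shuffled pool so that all three share the same distribution; the split fractions and the helper name train_dev_test_split are illustrative assumptions.

```python
import numpy as np

def train_dev_test_split(X, y, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle once, then carve dev and test sets off the same distribution.
    For very large datasets the dev/test fractions can be much smaller, as long
    as the sets stay big enough to compare models reliably."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_dev, n_test = int(dev_frac * len(y)), int(test_frac * len(y))
    dev, test, train = idx[:n_dev], idx[n_dev:n_dev + n_test], idx[n_dev + n_test:]
    return (X[train], y[train]), (X[dev], y[dev]), (X[test], y[test])

X, y = np.random.rand(1000, 5), np.random.randint(0, 2, 1000)
(train_X, train_y), (dev_X, dev_y), (test_X, test_y) = train_dev_test_split(X, y)
print(len(train_y), len(dev_y), len(test_y))   # 800 100 100
```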

Practicals

Lab List 1

  1. Implement logistic regression classification with (a) gradient descent and (b) stochastic gradient descent. Plot the cost function over iterations.
  2. Experiment with logistic regression by adding a momentum term and an adaptive sub-gradient method.
  3. Write code to learn the weights of a perceptron for Boolean functions (NOT, OR, AND, NOR, and NAND); a minimal sketch follows this list.
  4. Implement a feed-forward neural network for solving (a) regression and (b) 2-class classification problems. Also experiment with hyper-parameter tuning.
  5. Train and test a feed-forward neural network for multi-class classification using a softmax output layer.
  6. Create 2D and 3D CNNs for image classification. Experiment with different network depths, striding, and pooling values.
  7. Implement (a) an RNN for image classification, (b) a GRU network, and (c) an LSTM network.
  8. Implement an autoencoder, a denoising autoencoder, and a sparse autoencoder.
  9. Design stochastic encoders and decoders.
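
For exercise 3, a minimal NumPy sketch of the perceptron learning rule on the Boolean AND function; the learning rate and epoch count are illustrative, and the same loop handles OR, NAND, NOR, and single-input NOT.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Perceptron learning rule: w <- w + lr * (target - prediction) * x."""
    w = np.zeros(X.shape[1] + 1)                         # last entry is the bias weight
    Xb = np.hstack([X, np.ones((len(X), 1))])            # append a constant bias input
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = int(xi @ w > 0)                       # threshold activation
            w += lr * (target - pred) * xi               # update only on mistakes
    return w

# Boolean AND on {0,1}^2.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])
w = train_perceptron(X, y_and)
print([int(np.append(x, 1) @ w > 0) for x in X])         # expected: [0, 0, 0, 1]
```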

Additional Information

Text Books


Buduma, N. (2017). Fundamentals of Deep Learning. O'Reilly Media.
Heaton, J. (2015). Deep Learning and Neural Networks. Heaton Research, Inc.

Additional Resources


Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
Deng, L., & Yu, D. (2009). Deep Learning: Methods and Applications (Foundations and Trends in Signal Processing). Now Publishers Inc.
Hall, M. L. (2011). Deep Learning. VDM Verlag.

Teaching Learning Process


Use of ICT tools in conjunction with traditional classroom teaching methods
Interactive sessions
Class discussions

Assessment Methods

Written tests, assignments, quizzes, and presentations, as announced by the instructor in class

Keywords

Convolutional neural networks, recurrent networks, autoencoders.

Disclaimer: Details on this page are subject to change as per University of Delhi guidelines. For the latest updates, please refer to the University of Delhi website.