COMP 137 Deep Neural Networks

Instructor

Liping Liu

Class times and location

MW 10:30-11:45

Office hours:

TBD

Description & Objective:

Deep neural networks are tremendously successful in numerous applications, especially when the data is complex and large in scale. In this course, we will discuss typical deep network architectures and techniques for training these models. We will focus on the following topics:

  1. Feedforward neural networks
  2. Convolutional neural networks: convolutional, non-linear, pooling, and batch normalization layers; computer vision applications
  3. Recurrent neural networks: vanilla RNN, LSTM, GRU; NLP applications
  4. Optimization: stochastic optimization; practical issues such as vanishing and exploding gradients
  5. Regularization: regularization with norms, dropout, data augmentation
  6. Computation: back-propagation; packages such as TensorFlow and Keras

After this course, a successful student should, given a learning problem, be able to: 1) decide whether deep learning is appropriate, 2) identify the appropriate type of neural network, 3) implement neural networks with existing packages, and 4) train neural networks correctly.
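As a small preview of the implementation skills listed above, here is a minimal sketch of a feedforward network built and trained with the Keras API in TensorFlow. The layer sizes, optimizer, and synthetic data are illustrative assumptions, not course materials.

    # Minimal sketch of a feedforward network in Keras (TensorFlow 2.x).
    # The architecture, optimizer, and synthetic data below are illustrative
    # assumptions only, not part of any assignment.
    import numpy as np
    import tensorflow as tf

    # Synthetic binary-classification data: 1000 examples, 20 features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20)).astype("float32")
    y = (X.sum(axis=1) > 0).astype("float32")

    # Feedforward network: two hidden layers with ReLU, sigmoid output.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Stochastic optimization with Adam; gradients are computed by
    # back-propagation automatically inside the framework.
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)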

Materials:

Book: Deep Learning. Goodfellow, Bengio, and Courville. MIT Press, 2016.

Reading: The first part of this course will be based on "Deep Learning" by Goodfellow et al. The book is easy to read. We do not have time to cover all of the material in the book, but the sections we do cover will be very useful in this course.

Course Work and Grading Policy

Collaboration: Discussions are highly encouraged, but all work needs to be completed independently by individuals or teams. You may communicate your ideas verbally or by handwritten notes, but you cannot share your code or report with each other. If you need to use code from online resources, you should download the corresponding packages or files and import the functions or classes you want to use. You must clearly acknowledge the use of these resources.

Late submissions: Every student gets 3 free tickets representing 3 extra days you can spend on your projects. Once all tickets are used up, a late submission has its points discounted by 50% if it is submitted within 24 hours after the deadline and receives zero points if it is later. If a group project is late, the rule applies to all group members, and everyone's share is calculated separately.

Prerequisites:

Comp 135 Introduction to Machine Learning.

Academic Integrity Policy:

On assignments: you must work out the details of each solution and code/write it up on your own. You may verbally discuss the problems and general ideas about their solutions with other students, but you CANNOT show your written or typed solutions to others or copy theirs. You may consult other textbooks or existing content on the web, but you CANNOT ask for answers through question-answering websites such as (but not limited to) Quora and StackOverflow. If you find material that poses the same problem and provides a solution, you CANNOT check or copy the solution provided.

On the final project: each team needs to work out the project on its own. The team members should try their best to balance the work between the two of them. If any code comes from a third party, it needs to be wrapped in a function or package and labeled as third-party.

This course will strictly follow the Academic Integrity Policy of Tufts University. For any issues not covered above, please refer to the Academic Integrity Policy at Tufts.

Accessibility:

Tufts and the instructor of COMP 137 in Fall 2020 strive to create a learning environment that is welcoming to students of all backgrounds. Please see the detailed accessibility policy at Tufts.