COMP 150: Deep Neural Networks
Instructor
Class times and location
MW 4:30-5:45pm, Bromfield-Pearson 002
Office hours:
W 2-3pm, M 1-2pm at Halligan 234
Description & Objective:
Deep neural networks have been tremendously successful in numerous applications, especially when the data is complex and large in scale. In this course, we will discuss typical deep network architectures and techniques for training these models. We will focus on the following topics:
- Feedforward neural networks
- Convolutional neural networks: convolutional, non-linear, pooling, and batch-normalization layers; computer vision applications
- Recurrent neural networks: vanilla RNNs, LSTMs, GRUs; natural language processing applications
- Optimization: stochastic optimization; practical issues such as vanishing and exploding gradients
- Regularization: norm-based regularization, dropout, data augmentation
- Computation: back-propagation; packages such as TensorFlow and Keras
After this course, a successful student should be able to, for a given learning problem: 1) decide whether deep learning is appropriate, 2) identify the appropriate type of neural network, 3) implement the network with existing packages, and 4) train it correctly.
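To make the objectives concrete, here is a minimal NumPy sketch of the forward pass of a two-layer feedforward network, the first architecture covered above. This is illustrative only: the layer sizes, initialization scale, and function names are arbitrary assumptions, not course specifications.

```python
import numpy as np

def relu(x):
    # ReLU non-linearity: element-wise max(0, x)
    return np.maximum(0.0, x)

# Randomly initialized weights for a tiny network: input dim 4 -> hidden dim 3 -> output dim 2
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 3))
b1 = np.zeros(3)
W2 = rng.normal(scale=0.1, size=(3, 2))
b2 = np.zeros(2)

def forward(x):
    # Hidden layer with ReLU, then a linear output layer (logits)
    h = relu(x @ W1 + b1)
    return h @ W2 + b2

x = np.ones(4)           # an example input vector
y = forward(x)           # output has shape (2,)
```

Training such a network (computing gradients via back-propagation and updating the weights with stochastic optimization) is the subject of the assignments.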
Materials:
Book: Deep Learning. Goodfellow, Bengio, and Courville. MIT Press, 2016.
Similar courses:
- Convolutional Neural Networks for Visual Recognition at Stanford.
- Natural Language Processing with Deep Learning at Stanford.
- Introduction to Deep Learning at MIT.
Course Work and Grading Policy
- In-class quizzes (5%): three to five in-class quizzes are scheduled on random dates. Their purpose is to encourage attendance and collect feedback.
- Participation (2%):
- Class discussion (1%): the instructor will take note of students' questions and monitor class discussions.
- Piazza discussion (1%): the top 10 Piazza contributors receive the full 1%; other students receive credit in proportion to the contribution of the 10th-ranked contributor.
- Assignments (40%):
- Assignment 1 (8%): setting up the programming environment; implementing a simple neural network
- Assignment 2 (16%): implementing a convolutional neural network
- Assignment 3 (16%): implementing a recurrent neural network
- Final project (53%):
- Project proposal (10%): Students are encouraged to form teams of at most two to work on a problem as the final project. Each team first writes a proposal, which includes a problem description, the dataset, a plan, and a review of existing methods.
- Project implementation and report (38%): The team executes the plan for the proposed problem and writes a report, which should take the format of a research paper.
- Project presentation (5%): The team needs to present the project to the entire class.
Prerequisites:
COMP 135: Introduction to Machine Learning.
Academic Integrity Policy:
On assignments: you must work out the details of each solution and code/write it up on your own. You may verbally discuss the problems and general solution ideas with other students, but you CANNOT view or copy written or typed solutions from others. You may consult other textbooks or existing content on the web, but you CANNOT ask for answers on question-answering websites such as (but not limited to) Quora and StackOverflow. If you find material that addresses the same problem and provides a solution, you CANNOT read or copy that solution.
On the final project: each team must work out the project on its own. Team members should do their best to balance the work between the two members. Any code from a third party must be wrapped in a function or package and labeled as third-party.
This course will strictly follow the Academic Integrity Policy of Tufts University. For any issues not covered above, please refer to the Academic Integrity Policy at Tufts.
Accessibility:
Tufts and the instructor of COMP 150 in Spring 2018 strive to create a learning environment that is welcoming to students of all backgrounds. Please see the detailed accessibility policy at Tufts.