algo home

Resources for Algorithms


This page does not change much over time. It contains course notes and videos, as well as other useful information, organized by topic.


Course Topics (not necessarily in schedule order)

  1. Things to review before starting this class
  2. big-O notation
  3. Recurrences
  4. Sorting
  5. Hashing
  6. Deterministic Selection (includes separate notes on partitioning)
  7. Indicator Random Variables
  8. Quicksort and RandSelect
  9. Binary search trees
  10. Augmenting data structures
  11. Amortized Analysis
  12. Dynamic programming
  13. Graphs: intro, BFS, DFS, topological sort, SCC
  14. Minimum spanning trees
  15. Single-source shortest paths
  16. P vs NP


Informal summary PDF

This PDF contains brief, informal descriptions of the course material. It's useful if you're looking for a quick summary, but it only complements the class notes; it does not replace them. If you spot any errors or particularly unclear descriptions, please let me know.



A note about videos

For most topics, live lectures were recorded in the Fall 2016 and Spring 2017 semesters at Tufts University. A couple were recorded in Spring 2018 at NYU. Because some of those lectures have minor audio issues, and because I have refreshed the course notes, I have started creating new videos. At first these will be voice-over presentations; eventually they will be replaced with new recordings of live lectures. Until then, the old videos will remain available.
If you find a particular clip to be unclear, please let me know.

If you want a supplemental video source, check out the MIT OpenCourseWare video lectures for Intro to Algorithms, starring Charles Leiserson and Erik Demaine (from 2005). This is entirely optional. They cover a few things that we don't, and vice versa. Here is the link (which seems to have subtitles too). You can also view or download the free videos (without subtitles) from the iTunes store by searching for "Introduction to Algorithms", or here.




  1. Things to review before starting this class
    1. Discrete math
    2. Data structures
      • Arrays, linked lists, stacks, queues. Also, heaps and binary search trees, but they are introduced on this page anyway.
        • You should understand that you can't access an arbitrary item in a linked list in constant time, or run binary search on one.
        • You should understand that you can't insert an element into the middle of an array in constant time.
    3. Basic probability
        You should know what expected value is. See here. [You can also see the intro of the IRV section below]
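As a quick refresher on that last point: the expected value of a discrete random variable is the probability-weighted average of its outcomes. A minimal sketch (my own illustration, not part of the course materials), using a fair six-sided die as the example:

```python
from fractions import Fraction

def expected_value(outcomes):
    """Expected value of a discrete random variable,
    given (value, probability) pairs."""
    return sum(p * v for v, p in outcomes)

# A fair six-sided die: each face 1..6 has probability 1/6.
die = [(face, Fraction(1, 6)) for face in range(1, 7)]
print(expected_value(die))  # 7/2, i.e. 3.5
```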


  2. big-O notation (aka Theta notation)
    • Prerequisite knowledge: it helps to be familiar with common functions (see 1.1 above)


  3. Recurrences
    1. The recursion tree. Using induction.
    2. The Master method
      • New class notes: full slideshow and condensed
      • New video:
      • Old class notes: full slideshow and condensed
      • Old video: My 2017 lecture. (47min). Note that it is titled "part 1", but in fact it is self-contained. "Part 2" is just Strassen's algorithm, in section 3.3.d.
      • MIT videos: Lecture 2 from 48:35 to end.
      • Book: chapter 4, p.93-96, although more intuition can be gained by reading up to p.99.
      • FYI: the Akra-Bazzi method generalizes the master method. This is beyond the scope of this course.
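As a sanity check on the master method: the merge-sort recurrence T(n) = 2T(n/2) + n falls under case 2 (a = b = 2, so f(n) = n = Θ(n^(log_b a))), predicting T(n) = Θ(n log n). A small numeric check (my own, not from the notes) that the ratio T(n)/(n log n) settles toward a constant:

```python
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    """Merge-sort-style recurrence: T(n) = 2 T(n/2) + n, with T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# Master method case 2 predicts T(n) = Theta(n log n),
# so T(n) / (n log2 n) should approach a constant (here, 1).
for n in [2**10, 2**15, 2**20]:
    print(n, T(n) / (n * log2(n)))
```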
    3. Examples of using recurrences; recursive algorithms. (Just browse, this is not exam material)
      1. Divide and conquer, and binary search.
        • MIT videos: Lecture 3 up to 13:00
        • Book: chapter 4, p.65-67
      2. Computing the power of a number
        • MIT videos: Lecture 3 from 13:00 up to 17:40
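The divide-and-conquer idea here is exponentiation by squaring: x^n = (x^(n/2))^2 when n is even, giving the recurrence T(n) = T(n/2) + Θ(1) = Θ(log n) multiplications. A sketch of the standard recursive version (mine, for illustration):

```python
def power(x, n):
    """Compute x**n for a nonnegative integer n,
    using Theta(log n) multiplications (squaring)."""
    if n == 0:
        return 1
    half = power(x, n // 2)        # one recursive call, not two
    if n % 2 == 0:
        return half * half
    return half * half * x

print(power(3, 10))  # 59049
```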
      3. Computing Fibonacci numbers
      4. Matrix multiplication (Strassen's method)
        • My 2017 lecture. (13min.)
        • MIT videos: Lecture 3 from 33:45 to 54:40
        • Book: chapter 4, p.75-82.
      5. VLSI embedding
        • MIT videos: Lecture 3 from 55:00 to end.


  4. Sorting
    1. Insertion sort (used as motivation for big-O; see section 2)
      • MIT videos: Lecture 1, 17:20 to 1:03:20.
      • Book: chapter 2, p.16-29
    2. Merge sort (used as motivation for studying recurrences; see section 3.1)
      • MIT videos: Lecture 1, 1:03:20 to end.
      • Book: chapter 2, p.30-38
    3. Heap sort
    4. Decision trees. Sorting lower bound.
    5. Counting sort and radix sort
      • New class notes: full slideshow and condensed
      • Old video on Counting sort: (2017 lecture) First 17 minutes only
      • New video on Radix sort: Slideshow with audio
      • MIT videos: Lecture 5, from 31:20 up to 1:04:00. After that the video has a more detailed analysis of radix sort. It is also in the book, but you are not required to cover this.
      • Book: chapter 8, p.194-199.
      • Visualizations for Counting sort and Radix sort by David Galles.
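For reference, counting sort runs in Θ(n + k) when the keys are integers in range 0..k-1, and it is stable, which is exactly what radix sort relies on. A minimal sketch (my own version; the notes' presentation may differ in details):

```python
def counting_sort(a, k):
    """Stable counting sort of integers in range(k), in Theta(n + k) time."""
    count = [0] * k
    for x in a:
        count[x] += 1
    # Prefix sums: count[x] becomes the position just past
    # the last slot reserved for key x in the output.
    for x in range(1, k):
        count[x] += count[x - 1]
    out = [0] * len(a)
    for x in reversed(a):          # reversed traversal keeps the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([4, 1, 3, 1, 0], 5))  # [0, 1, 1, 3, 4]
```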
    6. Other
    7. Quicksort is covered in section 8.


  5. Hashing (basics)
    1. Recap of competing data structures, and introduction
    2. Chaining
    3. Open addressing, linear and quadratic probing, double hashing, analysis under the uniform hashing assumption.
    4. Further reading (advanced, not part of this course): Universal hashing, Perfect hashing, Cuckoo hashing (and more about the cuckoo mafia)
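To make the chaining idea concrete: each table slot holds a list of the items that hash to it, so insert/search cost is proportional to the chain length, which is Θ(1 + α) in expectation under the simple uniform hashing assumption (α = n/m). A toy sketch (table size and use of Python's built-in hash are my own choices):

```python
class ChainedHashTable:
    """Toy hash table with chaining; each slot is a Python list."""
    def __init__(self, m=8):
        self.slots = [[] for _ in range(m)]

    def _slot(self, key):
        return self.slots[hash(key) % len(self.slots)]

    def insert(self, key, value):
        chain = self._slot(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)   # key already present: overwrite
                return
        chain.append((key, value))

    def search(self, key):
        for k, v in self._slot(key):      # scan only this key's chain
            if k == key:
                return v
        return None

t = ChainedHashTable()
t.insert("a", 1)
t.insert("b", 2)
print(t.search("a"), t.search("c"))  # 1 None
```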


  6. Deterministic Selection (aka order statistics; median-finding)
    1. Partitioning
    2. General approaches towards getting linear time algorithms (including "prune and search")
      • Prerequisite knowledge: Recurrences (by induction; section 3.1)
    3. Algorithm
      • Sections 6.1 and 6.2 are not critical for this, but they might help. You will need to understand section 3.1 though (recurrences by induction).
    4. Book: Chapter 9, p.220-222.
    5. A quote by Bernard Chazelle.
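Since partitioning underlies both selection and quicksort, here is a sketch of the Lomuto partition scheme, one of several standard variants (the course notes may use a different one):

```python
def partition(a, lo, hi):
    """Lomuto partition of a[lo..hi] around pivot a[hi].
    Returns the pivot's final index; runs in Theta(hi - lo) time."""
    pivot = a[hi]
    i = lo                                # boundary of the <= pivot region
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]             # place pivot in its final spot
    return i

a = [7, 2, 9, 4, 5]
p = partition(a, 0, len(a) - 1)
print(p, a)  # 2 [2, 4, 5, 7, 9]
```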


  7. Indicator Random Variables
    • Class notes: full slideshow and condensed
    • Video:
    • Book: Chapter 5
    • Examples
      1. The hat check problem.
        • FYI - Link 1 and link 2 calculate the probability of an outcome for this problem, without IRVs.
      2. The hiring problem.
        • Link. Note that the second problem in this link, involving birthdays, is mentioned below.
        • Harmonic series (related to the hiring problem).
      3. Finding local maxima in permutations.
        • Jump to the 30:00 mark in this YouTube video, which is part of Harvard's Stat 110 course. This part of the lecture lasts 9 minutes. After that comes an explanation of the St. Petersburg paradox, which is fun to think about. Here's the wiki on that.
      4. Counting inversions in permutations. (involves IRVs with 2 subscripts)
        • Look at problem 2 in this homework set from a course at WVU. This follows the hat check problem. Problem 3 is not related to IRVs, but is interesting.
      5. A birthday problem. (involves IRVs with 2 subscripts)
        • The 2nd example in this link is a variant of our good old birthday problem. I discuss this and one more variant here.
      6. A problem with balls and bins.
        • See the second example in this link. Evaluation of the expected value of each IRV is a bit more complicated in this example than in previous ones. Note that the first example in this link is equivalent to the hat check problem. It deals with fixed points in permutations. In a permutation of the integers 1...n, a fixed point occurs when integer k is placed at position k.
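A sanity check of the hat check / fixed-point result: with IRVs X_k = [item k is a fixed point], linearity of expectation gives E[number of fixed points] = Σ E[X_k] = n · (1/n) = 1, for every n. A brute-force verification over all permutations of small n (my own check, not course material):

```python
from itertools import permutations
from fractions import Fraction

def expected_fixed_points(n):
    """Average number of fixed points over all n! permutations of 0..n-1."""
    perms = list(permutations(range(n)))
    total = sum(sum(1 for k, v in enumerate(p) if k == v) for p in perms)
    return Fraction(total, len(perms))

# The IRV argument predicts exactly 1, independent of n.
print(all(expected_fixed_points(n) == 1 for n in range(1, 7)))  # True
```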


  8. Quicksort and RandSelect
    1. Quicksort
      1. Introduction and short derivation of expected runtime
      2. Analysis 1: expected runtime, with more detailed analysis
      3. Analysis 2: expected runtime via expected number of comparisons
    2. RandSelect (Randomized Selection)
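For concreteness, a short sketch of randomized quicksort (written functionally for clarity; the in-place version analyzed in lecture partitions the array directly, as in section 6.1):

```python
import random

def quicksort(a):
    """Randomized quicksort: expected Theta(n log n) comparisons,
    because a random pivot gives a balanced split in expectation."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 6, 1, 8, 2, 2]))  # [1, 2, 2, 3, 6, 8]
```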


  9. Binary search trees
    1. Basics
    2. Building a BST randomly, average depth, and relation to Quicksort.
      • Prerequisite knowledge: Expected value.
      • New class notes: full slideshow1 (about time), full slideshow2 (about height), and all notes condensed.
      • New video: 1: Intro and expected time. 2: Expected height. (Slideshow with audio)
      • Old class notes: full slideshow and condensed.
      • Old video:
        • My 2019 lecture (28 minutes)
        • My 2016 lecture. (24min.) [Note: since making this video I have added material to the end of the corresponding course notes. The quick analysis of expected height is not in this video.]
      • MIT videos: Lecture 9, up to 22:30.
      • If you're interested in seeing the more complicated proof of why the expected height of a random BST is only about 3 log n, I can share course notes and a 40-minute video; just ask. There is also a version in CLRS chapter 12.4, p.299-302, but it uses certain results without proving them.
    3. Red-black trees (an example of dynamic balanced BST)


  10. Augmenting data structures
    • Prerequisite knowledge: binary search trees (section 9)
    1. Intro, dynamic rank finding and dynamic selection
    2. Range counting (almost the same as dynamic rank queries)
    3. Interval trees


  11. Amortized Analysis

  12. Dynamic programming
    1. Counting paths
    2. Longest common subsequence
    3. Longest increasing subsequence
    4. Rod cutting
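As an illustration of the DP pattern, here is a sketch of the standard table-filling solution to longest common subsequence (12.2); the subproblem definition dp[i][j] = LCS length of the first i characters of x and the first j of y is the key step:

```python
def lcs_length(x, y):
    """Length of a longest common subsequence of x and y,
    via the standard O(len(x) * len(y)) DP table."""
    m, n = len(x), len(y)
    # dp[i][j] = LCS length of x[:i] and y[:j]; row/column 0 stay 0.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4  (e.g. "BCBA")
```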


  13. Graphs: intro, BFS, DFS, topological sort, SCC
    1. Representation and types of graphs (and a little bit about planarity)
    2. Breadth-first search (BFS) and depth-first search (DFS)
    3. Topological sort
    4. Strongly connected components
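To fix ideas on the traversal side, a sketch of BFS on an adjacency-list graph (my own minimal version; it computes shortest distances in number of edges, which is the property the lectures build on):

```python
from collections import deque

def bfs_distances(adj, s):
    """BFS from source s; adj maps each vertex to a list of neighbors.
    Returns shortest edge-count distances to reachable vertices; O(V + E)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()                   # FIFO order explores level by level
        for v in adj[u]:
            if v not in dist:             # first visit = shortest distance
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_distances(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```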


  14. Minimum spanning trees
    • Prerequisite knowledge: graph basics, familiarity with priority queues (e.g. heaps).
    1. Intro and properties
      • Class notes: full slideshow and condensed
      • Video:
        • My 2017 lecture: Part 1, and Part 2 (until 20:22, then Kruskal's algo begins)
        • MIT videos: Lecture 16, from 23:20 up to 1:00:30.
      • Book: Chapter 23, p.624-629.
    2. Kruskal's algorithm
    3. Prim's algorithm
      • Prerequisite knowledge: Familiarity with heaps as priority queues.


  15. Single-source shortest paths
    • Prerequisite knowledge: graph basics.
    1. General properties, relaxing edges.
    2. Dijkstra's algorithm
      • Prerequisite knowledge: Familiarity with heaps as priority queues.
      • New class notes, describing the algorithm without assuming knowledge of Prim: full slideshow and condensed.
      • New video (Slideshow with audio)
      • Old class notes, assuming knowledge of Prim's MST algorithm: full slideshow and condensed.
      • Old video: My 2016 lecture (7min; start at 2:00)
        [The slide with the proof has been updated since this video was made. Specifically, the initial weights used in the video are inconsistent with the inductive hypothesis. The point is still made though.]
      • MIT videos: Lecture 17, from 26:20 to 1:19:40. (the proof of correctness is far longer than what I require)
      • Book: Chapter 24, p.658-662.
      • Visualization by David Galles.
      • JavaScript demo by Kenji Ikeda.
    3. Bellman-Ford algorithm
      • Class notes: full slideshow and condensed
      • Video:
        • My 2016 lecture (main lemma starts at 15:04, algorithm starts at 21:20; total 22 minutes)
        • MIT videos: Lecture 18, up to 36:00. (more rigorous than what we will do).
      • Book: Chapter 24, p.651-654.
      • xkcd 69
    4. Algorithm for DAGs
      • Note: the proof of correctness relies on the main lemma that comes before Bellman-Ford.
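Tying the section together, a sketch of Dijkstra's algorithm with a binary heap (my own minimal version; it uses the lazy-deletion trick of skipping stale heap entries rather than a decrease-key operation):

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra with a binary heap: O((V + E) log V).
    adj maps u -> list of (v, weight) pairs; weights must be nonnegative."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry: u already finalized
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w           # relax edge (u, v)
                heapq.heappush(pq, (d + w, v))
    return dist

adj = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
print(dijkstra(adj, "s"))  # {'s': 0, 'a': 1, 'b': 3}
```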


  16. P vs NP
    1. Intro
      • Class notes: full slideshow and condensed
      • Video: My 2016 lecture until 28:00, then continues with reductions.
      • Book: Chapter 34, p.1048-1053, and 1061-1069. However, I introduce the topic more informally, for instance, without mentioning "languages".
    2. Examples of reductions
      • Class notes: full slideshow and condensed
      • Video: My 2016 lecture part 1 (starting from 28:00, i.e., continuing from above), then part 2
      • Book: Chapter 34, from p.1070 and on. This goes into much more detail than I will. Just browse through accordingly.
    3. Other
      • Use your newfound knowledge to torment your friends.
      • Here are two papers about well-known video games that can be generalized to be really hard. __1__ __2__
      • A brief, mostly non-mathematical, introductory video about P vs NP, shown to me by a former CS-2413 student.