Objective
This course will focus on tools and techniques in modern algorithm design (several of them drawn from analysis and optimization), and their applications to the design of fast algorithms for fundamental problems in theoretical computer science and numerical linear algebra.
Logistics
Where: YINS Seminar Room 335, 17 Hillhouse Ave
When: MW 2:30-3:45pm
Instructor: Sushant Sachdeva
Office hours: By appointment
Prerequisites
A good understanding of undergraduate algorithms (CS 365 or equivalent), linear algebra (Math 222/225), and some discrete probability; or permission of the instructor.
Syllabus
Here is a tentative list of topics we plan to cover in about the first two-thirds of the course:
* Concentration bounds
* Linear/semidefinite programming, and duality
* Applications of convex programming to approximation algorithms
* Basic techniques in optimization, such as gradient descent
* Solving systems of linear equations efficiently
* Fast solutions of flow problems
* Low-dimensional embeddings
In the remaining part of the course, the students will read and present research papers related to the above topics.
Notes
Click on a topic for scribe notes and additional readings.
14 Jan
[Notes] Scribe: Cyril Zhang Concentration bounds allow us to show that a random variable, under certain conditions, lies near its mean with high probability. In this lecture, we proved the inequalities of Markov, Chebyshev, and Chernoff, and used these to analyze a median-of-means amplification of Morris' approximate counting algorithm. Additional readings: Original paper by Morris.
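As a rough illustration (not the lecture's exact construction), here is a minimal Python sketch of Morris' counter with median-of-means amplification; the parameter choices `k` and `m` below are illustrative, not the values derived in class.

```python
import random
import statistics

def morris_count(stream_len, rng):
    """Morris' approximate counter: store only X, and use 2^X - 1
    as an (unbiased) estimate of the number of increments seen."""
    x = 0
    for _ in range(stream_len):
        # Increment X with probability 2^(-X).
        if rng.random() < 2.0 ** (-x):
            x += 1
    return 2 ** x - 1

def amplified_count(stream_len, k=30, m=9, seed=0):
    """Median-of-means amplification: average k independent counters
    to reduce the variance (Chebyshev), then take the median of m
    such averages to boost the success probability (Chernoff)."""
    rng = random.Random(seed)
    means = [
        statistics.mean(morris_count(stream_len, rng) for _ in range(k))
        for _ in range(m)
    ]
    return statistics.median(means)

est = amplified_count(10000)
```

A single Morris counter has standard deviation on the order of the count itself; averaging and taking medians is what turns it into a high-probability guarantee.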
16 Jan
[Notes] Scribe: Xiao Shi In this lecture, we talked about dimension reduction. We proved the Johnson-Lindenstrauss lemma, first using Gaussian vectors, and then showed how to extend the proof to uniformly random subspaces.
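The Gaussian-projection proof suggests a direct sketch: project with a random Gaussian matrix scaled by \(1/\sqrt{k}\) and check that a pairwise distance is roughly preserved. This toy Python version (dimensions chosen only for illustration) is a sketch of the construction, not a tuned implementation.

```python
import math
import random

def jl_project(points, k, seed=0):
    """Project d-dimensional points to k dimensions via a random
    Gaussian matrix scaled by 1/sqrt(k); by the Johnson-Lindenstrauss
    lemma, k = O(log n / eps^2) preserves all pairwise distances up
    to a (1 +/- eps) factor with high probability."""
    rng = random.Random(seed)
    d = len(points[0])
    g = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(k)]
    scale = 1.0 / math.sqrt(k)
    return [
        [scale * sum(g[i][j] * p[j] for j in range(d)) for i in range(k)]
        for p in points
    ]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Two points in R^500; their distance should survive projection to R^400.
p, q = [1.0] * 500, [0.0] * 500
proj = jl_project([p, q], k=400)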
21/26 Jan
[Notes] Scribe: Rasmus Kyng In this lecture, we introduced linear programming and showed how it can be used to model algorithmic questions. We also used linear programming to design approximation algorithms for Minimum Vertex Cover and Max-SAT.
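To illustrate the Vertex Cover result, here is a toy Python sketch of the rounding step. The fractional LP solution is hardcoded (for the triangle graph the LP optimum puts \(x_v = 1/2\) on every vertex); in practice one would obtain it from an LP solver.

```python
# Triangle graph: LP value is 3/2, and rounding every x_v >= 1/2 up
# yields a cover of size 3 <= 2 * (3/2), matching the factor-2 bound.
edges = [(0, 1), (1, 2), (0, 2)]
x = {0: 0.5, 1: 0.5, 2: 0.5}  # fractional LP optimum (assumed given)

cover = {v for v, val in x.items() if val >= 0.5}

# The rounding is always feasible: each edge constraint x_u + x_v >= 1
# forces at least one endpoint to have x value >= 1/2.
assert all(u in cover or v in cover for u, v in edges)
```

The same recipe, round the fractional solution against the constraint structure, is the template behind the Max-SAT algorithm as well.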
28 Jan
[Notes] Scribe: Charles Jin In this lecture, we introduced the dual of a linear program, and proved strong duality using Farkas' lemma.
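A small numeric check of weak duality can make the statement concrete. The LP below is an invented example: for min \(c^\top x\) s.t. \(Ax \ge b\), \(x \ge 0\), any dual-feasible \(y\) satisfies \(b^\top y \le c^\top x\) for any primal-feasible \(x\).

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Primal: min c.x  subject to  A x >= b, x >= 0
A = [[2, 1],
     [1, 3]]
b = [4, 6]
c = [3, 5]

x = [2.0, 2.0]  # primal feasible: 2*2+1*2 = 6 >= 4, 1*2+3*2 = 8 >= 6
y = [1.0, 1.0]  # dual feasible:   A^T y = [3, 4] <= c = [3, 5]

# Verify feasibility of both points.
assert all(dot(row, x) >= bi for row, bi in zip(A, b))
AT = list(map(list, zip(*A)))
assert all(dot(col, y) <= ci for col, ci in zip(AT, c))

# Weak duality: every dual-feasible value lower-bounds every
# primal-feasible value (here 10 <= 16).
assert dot(b, y) <= dot(c, x)
```

Strong duality, which the lecture proved via Farkas' lemma, says the two optimal values actually coincide.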
2 Feb
[Notes] Scribe: Alex Reinking In this lecture, we introduced the Max-Cut problem. We defined positive semidefinite matrices, and semidefinite programs (SDP). We interpreted SDPs as vector relaxations, and gave the Goemans-Williamson vector relaxation for Max-Cut.
4 Feb
[Notes] Scribe: Alex Reinking In this lecture, we completed the analysis of the randomized hyperplane rounding for the Max-Cut SDP. We then introduced the Balanced Separator problem. We presented the idea of strengthening relaxations, and gave a strengthened SDP for Balanced Separator using triangle inequalities.
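The hyperplane rounding itself fits in a few lines of Python. This sketch hardcodes the SDP solution rather than solving the SDP: for the triangle, the optimal vectors are three unit vectors at 120-degree angles (SDP value 9/4, true Max-Cut 2).

```python
import math
import random

def round_hyperplane(vectors, edges, rng):
    """Goemans-Williamson rounding: draw a random Gaussian vector g,
    place vertex i on the side sign(g . v_i), and count an edge as
    cut when its endpoints land on opposite sides."""
    d = len(vectors[0])
    g = [rng.gauss(0, 1) for _ in range(d)]
    side = [sum(gj * vj for gj, vj in zip(g, v)) >= 0 for v in vectors]
    return sum(1 for u, w in edges if side[u] != side[w])

# Hardcoded SDP optimum for the triangle: unit vectors at 120 degrees.
vectors = [(math.cos(2 * math.pi * i / 3), math.sin(2 * math.pi * i / 3))
           for i in range(3)]
edges = [(0, 1), (1, 2), (0, 2)]
cut = round_hyperplane(vectors, edges, random.Random(1))
```

For this instance every random hyperplane splits the three vectors 2-1, so the rounding always recovers the optimal cut of 2.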
9 Feb
[Notes] Scribe: Aaron Shim In this lecture, we presented a \(\log n\) approximation to the Balanced Separator problem using the ARV SDP with triangle inequality constraints.
11 Feb
[Notes] Scribe: Benjamin Peterson We presented the notion of a general convex program, and introduced the Lagrange dual of a convex program. We saw how weak duality and complementary slackness extend to convex programs. We mentioned Slater's condition for strong duality.
16 Feb
We proved a special case of Fritz John's theorem. We showed that the maximum volume ellipsoid contained inside a symmetric polytope approximates the polytope up to a factor of \(\sqrt{n}\). The proof used strong duality for general convex programs.
23/25 Feb
[Notes] Scribe: Rachel Lawrence We introduced iterative methods to solve linear systems: first, we used gradient descent to solve linear systems. Then, we presented the conjugate gradient method, which improves over gradient descent.
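For concreteness, here is a minimal Python sketch of the conjugate gradient method on a tiny symmetric positive definite system; it follows the standard recurrence, not the specific presentation in the notes.

```python
def conjugate_gradient(A, b, iters=25, tol=1e-12):
    """Conjugate gradient for Ax = b with A symmetric positive
    definite: each step minimizes the error in the A-norm over a
    growing Krylov subspace, so in exact arithmetic it converges
    in at most n steps."""
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - Ax (x = 0 initially)
    p = r[:]          # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # Update the direction so it stays A-conjugate to earlier ones.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)  # exact solution is (1/11, 7/11)
```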
2/4 Mar
[Notes] Scribe: Sam Anklesaria We introduced Laplacian systems, and proceeded to show how we can solve them in near-linear time using low-stretch spanning trees.
23 Mar
[Notes] Scribe: Alex Reinking We presented a construction of low-stretch spanning trees for unweighted graphs from the result by Elkin, Emek, Spielman, and Teng.
Class presentations
25 Mar [Notes] We discussed how \(\ell_1\) minimization and the restricted isometry property for matrices can be used for sparse recovery of underdetermined linear systems, even in the presence of noise.
30 Mar [Notes] We discussed a result of Fakcharoenphol, Rao, and Talwar that shows that any metric on n points can be embedded into a distribution over tree metrics with distortion \(\log n\).
1 Apr [Notes] We discussed an SDP algorithm for coloring 3-colorable graphs due to Karger, Motwani, and Sudan.
6 Apr [Notes] We discussed the constructive proof of the Lovász local lemma due to Moser.
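Moser's argument is algorithmic, and the algorithm itself is short. The Python sketch below uses an invented toy instance (disjoint 3-clauses, so the local lemma condition holds trivially); the clause encoding is ours, not from the paper.

```python
import random

def moser_fix(n_vars, clauses, seed=0):
    """Moser's resampling algorithm for k-SAT: start from a uniformly
    random assignment and, while some clause is violated, resample the
    variables of one violated clause.  Under the Lovász local lemma
    condition (each clause shares variables with few others), this
    terminates quickly in expectation."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(clause):
        # A clause is a list of literals (var, is_positive); it is
        # violated when every literal evaluates to False.
        return all(assign[v] != pos for v, pos in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        # Resample the variables of the first violated clause.
        for v, _ in bad[0]:
            assign[v] = rng.random() < 0.5

# Toy instance: two disjoint 3-clauses.
clauses = [[(0, True), (1, True), (2, False)],
           [(3, False), (4, True), (5, True)]]
sol = moser_fix(6, clauses)
```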
8 Apr [Notes] We studied and analysed the power method and the Lanczos method for estimating the largest eigenvalue of a given matrix.
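The power method portion is easy to sketch. This toy Python version (matrix chosen only for illustration) iterates the matrix-vector product and reads off the Rayleigh quotient; the Lanczos refinement is not shown.

```python
import math
import random

def power_method(A, iters=200, seed=0):
    """Power method: repeatedly apply A to a random start vector and
    normalize; the iterate converges to the top eigenvector at a rate
    governed by the eigenvalue ratio |lambda_2 / lambda_1|."""
    rng = random.Random(seed)
    n = len(A)
    v = [rng.gauss(0, 1) for _ in range(n)]
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # The Rayleigh quotient v^T A v estimates the largest eigenvalue.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(vi * avi for vi, avi in zip(v, Av))

A = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 3 and 1
lam = power_method(A)
```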
13 Apr [Notes] We discussed how the matrix multiplicative weights method can be used for developing fast SDP algorithms.
13 Apr [Notes] We discussed a simple randomized algorithm due to Goel, Kapralov, and Khanna that finds perfect matchings in regular bipartite graphs in \(O(n \log n)\) time.
20 Apr [Notes] We discussed a result due to Hsu and Kakade, that allows us to learn Gaussian mixtures using 3-dimensional tensors.
Principal readings
There will be no textbook for the course, but we will use several online resources as reading material, in addition to the scribe notes. Some of the relevant courses/resources are:
* Course notes from Advanced Algorithms by Sanjeev Arora
* Convex Optimization by Boyd and Vandenberghe
* The Design of Approximation Algorithms by D. Williamson and D. Shmoys
* Concentration of Measure for the Analysis of Randomized Algorithms by D. Dubhashi and A. Panconesi
* Faster Algorithms via Approximation Theory by S. Sachdeva and N. Vishnoi
* Lx=b by N. Vishnoi
* Course notes from 'An Algorithmist's Toolkit' by Jonathan Kelner