Objective

This course focuses on research topics in modern algorithm design: tools, techniques, and their applications to fundamental problems in theoretical computer science. In this offering, the emphasis is on methods from continuous optimization and analysis, and their use in designing fast algorithms for such problems and for numerical linear algebra.

Logistics

Where: BA 4010
When: T 2-4pm
Instructor: Sushant Sachdeva
Office hours: By appointment (email me, or catch me after class)

Prerequisites

A good understanding of undergraduate algorithms, linear algebra, probability, and some vector calculus. Above all, the course assumes good mathematical maturity, as it will be theoretical and mathematical in nature.

Syllabus

Here is a tentative list of topics we plan to cover in the course:
* Solving systems of linear equations efficiently
* Basic techniques in optimization, such as gradient descent
* Concentration bounds
* Low dimensional embeddings
* Linear/Semidefinite programming, and duality
* Applications of convex programming to approximation algorithms
In the remaining part of the course, the students will read and present research papers related to the above topics.

Notes

Click on the topic for scribe notes and additional readings

  1. 12 Sep
    [Notes from Spring '15]
We formulated the problem of linear regression and proved convergence bounds for solving it using gradient descent.

    Additional readings: Section 9.1 in my survey: Faster Algorithms via Approximation Theory
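
    As a rough illustration (not from the scribe notes), here is a minimal numpy sketch of gradient descent for least-squares regression; the function name, step size, and iteration count are illustrative choices, not the ones analyzed in class.

      import numpy as np

      def gradient_descent_regression(A, b, steps=500):
          """Minimize f(x) = ||Ax - b||^2 by gradient descent."""
          # The gradient is 2 A^T (Ax - b); its Lipschitz constant is
          # 2 * lambda_max(A^T A), so eta = 1 / (2 * lambda_max) is a safe step.
          eta = 0.5 / np.linalg.norm(A, 2) ** 2
          x = np.zeros(A.shape[1])
          for _ in range(steps):
              x -= eta * 2 * A.T @ (A @ x - b)
          return x

      # Small random least-squares instance, compared against a direct solver.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((100, 5))
      b = rng.standard_normal(100)
      x_gd = gradient_descent_regression(A, b)
      x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(np.linalg.norm(x_gd - x_ls))  # should be small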

  2. 19 Sep
    [Notes from Spring '15]
    We studied the Conjugate Gradient method that does better than Gradient descent by picking the best vector in the Krylov subspace.

    Additional readings: Section 9.2 in my survey: Faster Algorithms via Approximation Theory
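
    For concreteness, here is a minimal numpy sketch of the conjugate gradient iteration for a symmetric positive definite system Bx = b; the notation and iteration count are illustrative rather than taken from the notes.

      import numpy as np

      def conjugate_gradient(B, b, iters=50):
          """Solve Bx = b for symmetric positive definite B.

          After t iterations, the CG iterate is the best approximation (in the
          B-norm) to the solution from the Krylov subspace
          span{b, Bb, ..., B^{t-1} b}.
          """
          x = np.zeros_like(b)
          r = b - B @ x          # residual
          p = r.copy()           # search direction
          rs = r @ r
          for _ in range(iters):
              Bp = B @ p
              alpha = rs / (p @ Bp)
              x += alpha * p
              r -= alpha * Bp
              rs_new = r @ r
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      rng = np.random.default_rng(1)
      M = rng.standard_normal((30, 30))
      B = M @ M.T + 30 * np.eye(30)   # a well-conditioned SPD matrix
      b = rng.standard_normal(30)
      print(np.linalg.norm(conjugate_gradient(B, b) - np.linalg.solve(B, b)))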

  3. 26 Sep
    [Notes from Spring '15]
We introduced Laplacian systems, and showed how to solve them in near-linear time using the randomized Kaczmarz method and low-stretch spanning trees.

Additional readings: Chapter 14 in Nisheeth Vishnoi's survey 'Lx=b'. This survey is a good source for a lot of material around Laplacian solvers. Here are notes on low-stretch spanning trees, based on this paper by Elkin, Emek, Spielman, and Teng.
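
    The near-linear-time Laplacian solver applies Kaczmarz-style projections in a more structured way; as a simpler, self-contained illustration, here is a sketch of plain randomized Kaczmarz for a consistent system Ax = b, with illustrative parameter choices.

      import numpy as np

      def randomized_kaczmarz(A, b, iters=20000, seed=0):
          """Randomized Kaczmarz for a consistent linear system Ax = b.

          Each step samples a row i with probability proportional to ||a_i||^2
          and projects the iterate onto the hyperplane {x : a_i^T x = b_i}.
          """
          rng = np.random.default_rng(seed)
          row_norms = np.einsum('ij,ij->i', A, A)
          probs = row_norms / row_norms.sum()
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              i = rng.choice(A.shape[0], p=probs)
              x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
          return x

      # Consistent overdetermined system: b lies in the range of A.
      rng = np.random.default_rng(2)
      A = rng.standard_normal((200, 10))
      x_true = rng.standard_normal(10)
      print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))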

  4. 3/10 Oct
    [Notes from Spring '15]
We introduced linear programming (LP) and approximation algorithms, used LP relaxations to design some simple approximation algorithms, and proved the final inequality in the MAX-SAT approximation analysis.
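
    The final step of the standard LP-rounding analysis for MAX-SAT rests on the inequality \(1 - (1 - z/k)^k \ge \bigl(1 - (1 - 1/k)^k\bigr)\, z\) for \(z \in [0,1]\), which follows from concavity; if the lecture used a different final inequality, take this as an assumption. Here is a quick numerical sanity check of it.

      import numpy as np

      # Check 1 - (1 - z/k)^k >= (1 - (1 - 1/k)^k) * z on a grid of z in [0, 1]
      # for clause sizes k = 1..10.  Since (1 - 1/k)^k <= 1/e, this is what
      # yields the (1 - 1/e) guarantee for LP rounding.
      for k in range(1, 11):
          z = np.linspace(0, 1, 1001)
          lhs = 1 - (1 - z / k) ** k
          rhs = (1 - (1 - 1 / k) ** k) * z
          assert np.all(lhs >= rhs - 1e-12), k
      print("inequality holds on the grid for k = 1..10")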

  5. 10 Oct
    [Notes from Spring '15]
We introduced several equivalent notions of convexity and proved Jensen's inequality and the AM-GM inequality.
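
    As a short worked example of Jensen's inequality in action (my derivation, not necessarily the one from class): applying concavity of \(\log\) to positive numbers \(a_1, \dots, a_n\) gives
    \[
      \log\Bigl(\frac{1}{n}\sum_{i=1}^{n} a_i\Bigr) \;\ge\; \frac{1}{n}\sum_{i=1}^{n} \log a_i \;=\; \log\Bigl(\prod_{i=1}^{n} a_i\Bigr)^{1/n},
    \]
    and exponentiating both sides yields the AM-GM inequality \(\frac{1}{n}\sum_i a_i \ge \bigl(\prod_i a_i\bigr)^{1/n}\).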

  6. 12 Oct
    [Notes1 from Spring '15] [Notes2 from Spring '15]
In this lecture, we introduced the Max-Cut problem, and defined semidefinite programs (SDP). We interpreted SDPs as vector relaxations, gave the Goemans-Williamson vector relaxation for Max-Cut, and rounded it using their randomized hyperplane rounding.
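
    Here is a minimal numpy sketch of the hyperplane-rounding step, assuming the SDP has already been solved and its solution is given as an n x d array of unit vectors (the function names and the toy instance are my own).

      import numpy as np

      def hyperplane_rounding(V, seed=0):
          """Randomized hyperplane rounding: V has the SDP unit vectors as rows.

          A random Gaussian direction g defines a hyperplane through the origin;
          each vertex is labeled by the side its vector falls on."""
          rng = np.random.default_rng(seed)
          g = rng.standard_normal(V.shape[1])
          return np.sign(V @ g)              # +1 / -1 label per vertex

      def cut_value(W, signs):
          """Total weight of edges crossing the cut, for +/-1 labels."""
          return 0.25 * np.sum(W * (1 - np.outer(signs, signs)))

      # Toy instance: the 4-cycle, with vertices embedded at 90-degree angles.
      V = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
      W = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                    [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
      print(cut_value(W, hyperplane_rounding(V)))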

  7. 24 Oct
    [Notes1 from Spring '15] [Notes2 from Spring '15]
In this lecture, we introduced the Balanced Separator problem. We presented the idea of strengthening relaxations, gave a strengthened SDP for Balanced Separator (from the work of Arora-Rao-Vazirani) using triangle inequalities, and proved that it gives an \(O(\log n)\) approximation.
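
    For reference, one common way to write the strengthened relaxation as a vector program (the normalization and constants may differ from the scribe notes) is:
    \[
    \begin{aligned}
      \text{minimize}\quad & \tfrac{1}{4}\sum_{(i,j)\in E} \|v_i - v_j\|^2 \\
      \text{subject to}\quad & \|v_i\|^2 = 1 \quad \forall i \in V,\\
      & \|v_i - v_j\|^2 + \|v_j - v_k\|^2 \ge \|v_i - v_k\|^2 \quad \forall i,j,k \in V,\\
      & \textstyle\sum_{i<j} \|v_i - v_j\|^2 \ge 4c(1-c)\,n^2,
    \end{aligned}
    \]
    where the last (spreading) constraint encodes \(c\)-balancedness and the \(\ell_2^2\) triangle inequalities are the strengthening.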

  8. 31 Oct
    [Notes from Spring '15]
    Concentration bounds allow us to show that a random variable, under certain conditions, lies near its mean with high probability. In this lecture, we proved the Markov and Chebyshev inequalities and one version of Chernoff bounds.
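
    As a small illustration (the constants here are one standard textbook form, not necessarily the exact version proved in class), here is a numerical comparison of a Chernoff bound against the empirical tail of a binomial.

      import numpy as np

      # One standard Chernoff bound: if X is a sum of n i.i.d. Bernoulli(p)
      # variables with mean mu = n*p, then for 0 < delta <= 1,
      #   P[X >= (1 + delta) * mu] <= exp(-delta^2 * mu / 3).
      rng = np.random.default_rng(3)
      n, p, delta, trials = 1000, 0.1, 0.3, 100000
      mu = n * p
      X = rng.binomial(n, p, size=trials)
      empirical = np.mean(X >= (1 + delta) * mu)
      bound = np.exp(-delta ** 2 * mu / 3)
      print(f"empirical tail = {empirical:.4f}, Chernoff bound = {bound:.4f}")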

  9. 14 Nov
    [Notes from Spring '15]
We proved that a random projection can be used to reduce the dimensionality of a dataset down to \(O(\log n)\) dimensions while approximately preserving all the pairwise distances. This is the Johnson-Lindenstrauss lemma.
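
    Here is a minimal numpy sketch of a Gaussian random projection in the spirit of the lemma; the target dimension and its constant are illustrative, not the ones from the proof in class.

      import numpy as np

      def jl_project(X, eps=0.25, seed=0):
          """Project the rows of X to k = O(log n / eps^2) dimensions using a
          scaled Gaussian matrix, so that pairwise distances are preserved up
          to a (1 +/- eps) factor with high probability."""
          n = X.shape[0]
          k = int(np.ceil(8 * np.log(n) / eps ** 2))   # illustrative constant
          rng = np.random.default_rng(seed)
          G = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
          return X @ G

      rng = np.random.default_rng(4)
      X = rng.standard_normal((100, 5000))
      Y = jl_project(X)
      for i, j in [(0, 1), (2, 3), (4, 5)]:
          ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          print(f"distance ratio after projection: {ratio:.3f}")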

  10. 21 Nov
    [Notes from Spring '15]
We developed duality for linear programs, and proved strong LP duality via Farkas' lemma.
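
    For reference, one standard primal-dual pair (notation may differ from the scribe notes) is
    \[
      \text{(P)}\;\; \max\{\, c^\top x : Ax \le b,\ x \ge 0 \,\}
      \qquad
      \text{(D)}\;\; \min\{\, b^\top y : A^\top y \ge c,\ y \ge 0 \,\},
    \]
    and weak duality follows in one line: for any feasible \(x\) and \(y\), we have \(c^\top x \le (A^\top y)^\top x = y^\top A x \le y^\top b\). Strong duality, i.e., equality at the optima, is what requires Farkas' lemma.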

Principal readings

There will be no textbook for the course, but we will use several online resources as reading material, in addition to the scribe notes. Some of the relevant courses/resources are:
* Course notes from Advanced Algorithms by Sanjeev Arora
* Convex Optimization by S. Boyd and L. Vandenberghe
* The Design of Approximation Algorithms by D. Williamson and D. Shmoys
* Concentration of Measure for the Analysis of Randomized Algorithms by D. Dubhashi and A. Panconesi
* Faster Algorithms via Approximation Theory by S. Sachdeva, N. Vishnoi
* Lx=b by N. Vishnoi
* Course notes from 'An Algorithmist's toolkit' by Jonathan Kelner