New York Area Theory Day
Organized by: IBM/NYU/Columbia
Friday, May 10, 2013
Auditorium 109, Courant Institute of Mathematical Sciences, 251 Mercer Street, New York University
External sponsorship by: Google
The New York Area Theory Day is a semi-annual conference that aims to
bring together people in the New York metropolitan area for one day of
interaction and discussion. The Theory Day features several (usually
4-5) hour-long presentations by leading theoretical computer scientists
on state-of-the-art advances in various areas. Some presentations
survey the latest advances in an area, while others concentrate on a
particular result.
The meeting is free and open to everyone; in particular, students are
encouraged to attend.
Program
09:30 - 10:00: Coffee and bagels
10:00 - 10:55: Prof. Tim Roughgarden (Stanford University)
Extension Theorems for the Price of Anarchy
10:55 - 11:05: Short break
11:05 - 12:00: Dr. Allison Lewko (Microsoft Research New England)
On the Complexity of Asynchronous Agreement Against Powerful Adversaries
12:00 - 02:00: Lunch break
02:00 - 02:55: Prof. Avi Wigderson (Institute for Advanced Study)
Grey Boxes, Population Recovery and Partial Identification
02:55 - 03:15: Coffee break
03:15 - 04:10: Dr. Ankur Moitra (Institute for Advanced Study)
An Information Complexity Approach to Extended Formulations
To subscribe to our mailing list, follow instructions at
http://www.cs.nyu.edu/mailman/listinfo/theory-ny
Organizers:
Xi Chen: xichen@cs.columbia.edu
Yevgeniy Dodis: dodis@cs.nyu.edu
Tal Rabin: talr@us.ibm.com
==================================================================
Abstracts
Extension Theorems for the Price of Anarchy
Prof. Tim Roughgarden
Stanford University
The price of anarchy is a measure of the inefficiency of
selfish behavior that has been successfully analyzed in many
applications, including network routing, resource allocation, network
formation, and even models of basketball. It is defined as the
worst-case ratio between the welfare of a Nash equilibrium and
that of an optimal solution.
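As a concrete illustration of this ratio (Pigou's classic two-link
routing network, a standard textbook example rather than material from
the talk), the following Python sketch computes the price of anarchy
for one unit of traffic choosing between a link of constant cost 1 and
a link whose per-unit cost equals its load:

    # Pigou's example: a unit of selfish traffic splits between two
    # parallel links; link A costs 1 per unit, link B costs x per unit
    # when carrying a fraction x of the traffic.

    def total_cost(x):
        """Total cost when a fraction x uses the load-dependent link."""
        return x * x + (1 - x) * 1.0

    # Nash equilibrium: everyone uses link B, since its cost x never
    # exceeds the alternative's cost of 1.
    nash_cost = total_cost(1.0)   # = 1.0

    # Social optimum: minimizing x^2 + (1 - x) gives x = 1/2.
    opt_cost = total_cost(0.5)    # = 0.75

    print("price of anarchy:", nash_cost / opt_cost)   # 4/3

Here the equilibrium is a factor 4/3 worse than optimal, which is in
fact the worst case over all networks with affine cost functions.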
Pure-strategy Nash equilibria --- where each player deterministically
picks a single action --- are often easier to analyze than their more
general cousins like mixed-strategy Nash equilibria (where players can
randomize) and Bayes-Nash equilibria (where players are uncertain
about what game they're playing in). For this reason, the price of
anarchy is often analyzed, at least initially, only for a game's
pure-strategy Nash equilibria. But as much as he or she might want
to, the conscientious researcher cannot stop there --- such equilibria
might not exist, might be hard to compute, and are relevant only when
all players' preferences are common knowledge. Can we obtain price of
anarchy bounds for more general equilibrium concepts without working
any harder than we already do to analyze pure-strategy Nash equilibria?
Ideal would be an "extension theorem" that could be used in the
following "black-box" way: (1) prove a bound on the price of anarchy
of pure-strategy Nash equilibria of a game; (2) invoke the extension
theorem to conclude immediately that the exact same approximation
bound applies to some more general equilibrium concept. Such an
extension theorem would sidestep the obvious deterrents to
generalizing price of anarchy bounds beyond pure Nash equilibria ---
no extra work, and no loss in the approximation guarantee. In this
talk, I'll describe two such extension theorems: one for the possible
outcomes generated by the repeated play of no-regret learners, and
one for games of incomplete information.
==================================================================
On the Complexity of Asynchronous Agreement Against Powerful Adversaries
Dr. Allison Lewko
Microsoft Research New England
In this talk, we will present new techniques for proving lower
bounds on the running time of randomized algorithms for distributed
consensus in the presence of mobile failures. We begin with the
observation that the early randomized algorithms for consensus
presented by Bracha and Ben-Or succeed against even stronger
adversaries than they were initially designed to resist. We then prove
that the exponential expected running time of these algorithms is an
inherent consequence of that stronger resilience. This furthers our
understanding of the kinds of failures that can (or cannot) be
corrected by polynomial-time randomized algorithms. This is joint
work with
Mark Lewko.
==================================================================
Grey Boxes, Population Recovery and Partial Identification
Prof. Avi Wigderson
Institute for Advanced Study
The talk, like the title, will have three parts.
In the first, I will describe a new learning model we call "restriction
access". This model aims at generalizing prevailing "black-box"
access to functions when trying to learn the "device" (e.g. circuit,
decision tree, polynomial ...) which computes them. We propose a
"grey-box" access that allows certain partial views of the device,
obtained from random restrictions. The PAC-learning analog of our
model is able to efficiently learn decision trees and DNFs, which are
currently beyond reach in the standard "black-box" version of
PAC-learning.
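As a rough illustration of the kind of partial view a random
restriction produces, the Python sketch below restricts a toy DNF; the
representation and parameters here are illustrative assumptions, not
the model's actual definitions:

    import random

    # A DNF is a list of terms; each term is an AND of literals,
    # mapping a variable index to its required Boolean value.
    dnf = [{0: True, 2: False}, {1: True, 2: True}]

    def restrict(dnf, n_vars, p_fix=0.5):
        """Fix each variable independently with probability p_fix to a
        random bit and return the simplified restricted formula."""
        rho = {i: random.choice([False, True])
               for i in range(n_vars) if random.random() < p_fix}
        restricted = []
        for term in dnf:
            new_term, killed = {}, False
            for var, val in term.items():
                if var in rho:
                    if rho[var] != val:   # literal falsified:
                        killed = True     # the whole term drops out
                        break
                    # otherwise the literal is satisfied and removed
                else:
                    new_term[var] = val   # variable left free
            if killed:
                continue
            if not new_term:              # every literal satisfied:
                return rho, [{}]          # formula is constant True
            restricted.append(new_term)
        return rho, restricted

    rho, sub_device = restrict(dnf, n_vars=3)
    print("fixed variables:", rho)
    print("restricted sub-device:", sub_device)

A learner in this model sees such restricted sub-devices rather than
mere input-output pairs, which is the sense in which grey-box access
is stronger than black-box access.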
To analyze some of these learning algorithms, we study several
natural "population recovery" problems in which an unknown distribution
over an unknown population of vectors needs to be recovered from
partial or noisy samples. Such problems naturally arise in many other
contexts in learning, clustering, statistics, data mining and database
privacy, where loss and error may be introduced by nature, inaccurate
measurements, or on purpose. We give fairly efficient algorithms to
recover the data under fairly general assumptions, even when loss and
noise are close to the information-theoretic limit (namely, when they
nearly completely obliterate the original data).
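The sketch below (names and parameters are illustrative assumptions)
generates samples in one such lossy model, where each coordinate of a
drawn vector is independently erased with probability mu:

    import random

    # Unknown distribution over an unknown population of binary vectors.
    population = {(0, 1, 1): 0.7, (1, 0, 1): 0.3}

    def lossy_sample(dist, mu=0.5):
        """Draw a vector from dist, then erase each coordinate w.p. mu."""
        r, acc = random.random(), 0.0
        for vec, prob in dist.items():
            acc += prob
            if r <= acc:
                break
        return tuple(b if random.random() > mu else '?' for b in vec)

    samples = [lossy_sample(population) for _ in range(5)]
    print(samples)   # e.g. [('?', 1, '?'), (1, '?', 1), ...]

The recovery task is to estimate the distribution from many such
samples, even when mu is close to 1 and most coordinates are gone.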
The third part introduces a new combinatorial structure we call a
partial identification (PID) graph, which underlies our algorithms.
While standard IDs are subsets of features (vector coordinates) that
uniquely identify an individual in a population, partial IDs allow
ambiguity (and "imposters"), and thus can be shorter. PID graphs
capture this imposter-structure, and yield strategies for dimension
reduction of recovery problems and for re-assembling these local
pieces of statistical information into a global picture. The
combinatorial heart of this work is proving that every set of vectors
admits partial IDs with "cheap" PID graphs (and hence efficient
recovery). We further show how to find such near-optimal PIDs
efficiently.
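The following sketch (illustrative only, not the paper's construction)
makes the distinction between exact and partial IDs concrete: a subset
S of coordinates is an exact ID for a vector v if no other vector in
the population agrees with v on S, while a partial ID tolerates some
such "imposters":

    def imposters(v, S, population):
        """Vectors other than v that agree with v on coordinates S."""
        return [w for w in population
                if w != v and all(w[i] == v[i] for i in S)]

    population = [(0, 0, 1), (0, 1, 1), (1, 1, 0)]
    v = (0, 0, 1)

    print(imposters(v, S=[0], population=population))
    # [(0, 1, 1)]: {0} is only a partial ID for v (one imposter)
    print(imposters(v, S=[0, 1], population=population))
    # []: {0, 1} is an exact ID for v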
Based on joint works with Zeev Dvir, Anup Rao and Amir Yehudayoff.
==================================================================
An Information Complexity Approach to Extended Formulations
Dr. Ankur Moitra
Institute for Advanced Study
Often it is possible to express a given polytope $P$ as the projection
of a higher-dimensional polytope $Q$, and in so doing to drastically
reduce the number of facets needed. We call $Q$ an extended
formulation for $P$. In fact, small extended formulations are known
for a number of polytopes that arise in combinatorial optimization and
yet there are many polytopes that we do not expect to have a small
extended formulation -- e.g. polytopes that capture the solutions to
hard optimization problems. Can we prove unconditional results that
these polytopes do not have small extended formulations?
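A classical example of such savings (standard background, not a result
from the talk): the $\ell_1$-ball $\{x \in \mathbb{R}^n : \|x\|_1 \leq
1\}$ has $2^n$ facets, one inequality $\sum_i \epsilon_i x_i \leq 1$
per sign pattern $\epsilon \in \{-1,+1\}^n$, yet it is the projection
onto the $x$-coordinates of $\{(x,y) : -y_i \leq x_i \leq y_i, \sum_i
y_i \leq 1\}$, which is described by only $2n+1$ inequalities.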
Recently, there has been a revival of interest in this question. A
breakthrough result of Fiorini et al. (STOC 2012) established that the
TSP polytope has no subexponential-sized extended formulation.
Subsequently, Braun et al. (FOCS 2012) proved that the clique polytope
has no subexponential $O(n^{1/2 - \epsilon})$-approximate extended
formulation either. Here we give a new approach to these lower bounds
based on information complexity -- roughly a too-good-to-be-true
extended formulation would allow us to sample from a distribution
using too few bits of entropy, and this is the means by which we prove
our lower bounds. We use this framework to prove that the clique
polytope has no subexponential $O(n^{1 - \epsilon})$-approximate
extended formulation, thus giving an optimal and unconditional lower
bound that matches Håstad's celebrated hardness of approximation
results for clique.
This is based on joint work with Mark Braverman.
==================================================================