New York Area Theory Day - Spring 2016
Friday, April 15, 2016
Organized by: IBM/NYU/Columbia
External Sponsorship by: Google
Davis Auditorium, 412 Schapiro (CEPSR) Building,
Columbia University, New York (Directions)
Program:
9:30 - 10:00 Coffee and bagels
10:00 - 10:15 Prof. Cliff Stein and Prof. Mihalis Yannakakis
Remembering David S. Johnson (1945-2016)
10:20 - 11:15 Prof. Mariana Raykova
Succinct Adaptive Garbled RAM
11:15 - 11:25 Short break
11:25 - 12:20 Prof. Mark Braverman
Constant-Rate Coding for Multiparty Interactive Communication
Is Impossible
12:20 - 2:00 Lunch break
2:00 - 2:55 Prof. Ran Raz
Fast Learning Requires Good Memory: A Time-Space Lower Bound
for Parity Learning
2:55 - 3:15 Coffee break
3:15 - 4:10 Dr. Cynthia Phillips
Cooperative Computing for Autonomous Data Centers
4:30 - 5:30 Follow-up social (location TBD)
For directions, please see here.
To subscribe to our mailing list, follow instructions here.
Organizers:
Alex Andoni
Yevgeniy Dodis
Krzysztof Onak
======================================================================
Abstracts
Prof. Mariana Raykova (Yale)
Succinct Adaptive Garbled RAM
We show how to garble a large persistent database and then garble, one by
one, a sequence of adaptively and adversarially chosen RAM programs that
query and modify the database in arbitrary ways. Still, it is guaranteed
that the garbled database and programs reveal only the outputs of the
programs when run in sequence on the database. The runtime, space
requirements and description size of the garbled programs are
proportional only to those of the plaintext programs and the security
parameter. We assume indistinguishability obfuscation for circuits and
poly-to-one collision-resistant hash functions. The latter can be
constructed based on standard algebraic assumptions such as the hardness
of discrete log, factoring, or finding shortest independent vectors in
lattices. In contrast, all previous garbling schemes with persistent
data were shown secure only in the static setting where all the programs
are known in advance.
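To fix ideas, here is a minimal Python sketch of the syntax such a scheme
exposes. The class and method names are our own illustration, not the
paper's; the actual construction from indistinguishability obfuscation is
far more involved.

    # Illustrative interface only -- names and types are assumptions
    # made for exposition, not the paper's actual construction.
    class AdaptiveGarbledRAM:
        def garble_database(self, db: bytes, security_param: int):
            """One-time garbling of the persistent database; returns
            the garbled database and a secret garbling key."""
            ...

        def garble_program(self, key, program):
            """Garble the next RAM program. May be called adaptively,
            after earlier garbled programs and their outputs have
            already been observed by the adversary. The output size
            is proportional to the plaintext program's description."""
            ...

        def evaluate(self, garbled_db, garbled_program):
            """Run a garbled program on the garbled database; returns
            the program's output and the updated garbled database,
            revealing nothing beyond the output."""
            ...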
As an immediate application, our scheme is the first to provide a way to
outsource large databases to untrusted servers, and later query and
update the database over time in a private and verifiable way, with
complexity and description size proportional to those of the unprotected
queries.
Our scheme extends the non-adaptive RAM garbling scheme of Canetti and
Holmgren [ITCS 2016]. We also define and use a new primitive, called
adaptive accumulators, which is an adaptive alternative to the positional
accumulators of Koppula et al. [STOC 2015] and the somewhere statistically
binding hashing of Hubacek and Wichs [ITCS 2015]. This primitive might
well be useful elsewhere.
Joint work with Ran Canetti, Yilei Chen, and Justin Holmgren.
======================================================================
Prof. Mark Braverman (Princeton)
Constant-Rate Coding for Multiparty Interactive Communication Is
Impossible
We study coding schemes for multiparty interactive communication over
synchronous networks that suffer from stochastic noise, where each bit is
independently flipped with probability $\epsilon$. We analyze the minimal
overhead that must be added by the coding scheme in order to succeed in
performing the computation despite the noise.
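For concreteness, the stochastic noise model is just a binary symmetric
channel with crossover probability $\epsilon$ applied to every link. A
minimal Python sketch (function names ours):

    import random

    def bsc(bits, epsilon):
        """Binary symmetric channel: flip each bit independently
        with probability epsilon."""
        return [b ^ (random.random() < epsilon) for b in bits]

    # Example: roughly a 0.05 fraction of 10,000 bits get flipped.
    sent = [random.randint(0, 1) for _ in range(10_000)]
    received = bsc(sent, 0.05)
    flips = sum(s != r for s, r in zip(sent, received))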
Our main result is a lower bound on the communication of any
noise-resilient protocol over a star network with $n$-parties.
Specifically, we show a task that can be solved by communicating $T$ bits
over the noise-free network, but for which any protocol with success
probability of $1-o(1)$ must communicate at least $\Omega(T \log n /
\log \log n)$ bits when the channels are noisy. By a 1994 result of
Rajagopalan and Schulman, the slowdown we prove is the highest one can
obtain on any topology, up to a $\log \log n$ factor. We show that our
bound is tight for the star network by giving a matching coding scheme.
Joint work with Klim Efremenko, Ran Gelles, and Bernhard Haeupler.
======================================================================
Prof. Ran Raz (Weizmann)
Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity
Learning
We prove that any algorithm for learning parities requires either a
memory of quadratic size or an exponential number of samples. This proves
a recent conjecture of Steinhardt, Valiant and Wager and shows that for
some learning problems a large storage space is crucial.
More formally, in the problem of parity learning, an unknown string $x
\in \{0,1\}^n$ is chosen uniformly at random. A learner tries to learn
$x$ from a stream of samples $(a_1, b_1), (a_2, b_2), \ldots$, where each
$a_t$ is uniformly distributed over $\{0,1\}^n$ and $b_t$ is the inner
product of $a_t$ and $x$, modulo 2. We show that any algorithm for parity
learning that uses less than $n^2/25$ bits of memory requires an
exponential number of samples.
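For intuition about the threshold, a learner allowed about $n^2$ bits of
memory can store $n$ linearly independent equations and recover $x$ from
$O(n)$ samples by Gaussian elimination over GF(2); the theorem says that
with much less memory, no strategy avoids an exponential blow-up in
samples. A sketch of the memory-rich learner (ours, for illustration):

    import random

    n = 32
    x = random.getrandbits(n)            # the unknown parity vector

    def sample():
        """One stream sample (a_t, b_t) with b_t = <a_t, x> mod 2."""
        a = random.getrandbits(n)
        return a, bin(a & x).count("1") & 1

    # Memory-rich learner: keep n independent equations (~n^2 bits)
    # in fully reduced form, keyed by pivot bit position.
    pivots = {}                          # pivot position -> (row, rhs)
    while len(pivots) < n:
        a, b = sample()
        for pos, (row, rhs) in pivots.items():
            if (a >> pos) & 1:           # eliminate existing pivots
                a ^= row
                b ^= rhs
        if a:                            # linearly independent sample
            pos = a.bit_length() - 1
            for q, (row, rhs) in list(pivots.items()):
                if (row >> pos) & 1:     # keep old rows reduced too
                    pivots[q] = (row ^ a, rhs ^ b)
            pivots[pos] = (a, b)

    # With all n pivots present, each row pins down one bit of x.
    assert sum(rhs << pos for pos, (_, rhs) in pivots.items()) == x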
Previously, there was no non-trivial lower bound on the number of samples
needed, for any learning problem, even if the allowed memory size is
$O(n)$ (where $n$ is the space needed to store one sample).
We also give an application of our result in the field of bounded-storage
cryptography. We show an encryption scheme that requires a private key of
length $n$, as well as time complexity of $n$ per encryption/decryption
of each bit, and is provably and unconditionally secure as long as the
attacker uses less than $n^2/25$ memory bits and the scheme is used at
most an exponential number of times. Previous works on bounded-storage
cryptography assumed that the memory size used by the attacker is at most
linear in the time needed for encryption/decryption.
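Read concretely, a scheme matching the abstract's parameters uses the
parity-learning samples themselves: with shared key $x$, each bit $m$ is
encrypted as $(a, \langle a, x \rangle \oplus m)$ for a fresh random $a$,
costing $O(n)$ time per bit. A minimal sketch (our reading of the
abstract, so treat the details as assumptions):

    import random

    n = 128
    key = random.getrandbits(n)          # shared private key x

    def inner_bit(a, x):
        """<a, x> mod 2, with vectors packed into n-bit ints."""
        return bin(a & x).count("1") & 1

    def encrypt_bit(m):
        """Fresh random a per bit; m is hidden behind <a, key>."""
        a = random.getrandbits(n)
        return a, inner_bit(a, key) ^ m

    def decrypt_bit(a, c):
        return inner_bit(a, key) ^ c

    a, c = encrypt_bit(1)
    assert decrypt_bit(a, c) == 1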
======================================================================
Dr. Cynthia Phillips (Sandia National Laboratories)
Cooperative Computing for Autonomous Data Centers
We present a new distributed model for graph computations motivated by
limited information sharing. Two autonomous entities have collected
large social graphs. They wish to compute the result of running graph
algorithms on the entire set of relationships. Because the information is
sensitive or economically valuable, they do not wish to simply combine
the information in a single location and then run standard serial or
parallel graph algorithms. We consider two models for computing the
solution to graph problems in this setting: 1) limited-sharing: the two
entities can share only a polylogarithmic size subgraph; 2) low-trust:
the two entities must not reveal any information beyond the query answer,
assuming they are both honest but curious. We believe this model captures
realistic constraints on cooperating autonomous data centers.
We present results for both models for s-t connectivity. In the
limited-sharing model, our results exploit social network structure to
exchange O(log^2 n) bits, overcoming polynomial lower bounds for general
graphs. In the low-trust model, our algorithm requires no cryptographic
assumptions for 3 or more centers and does not even reveal node names.
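As a point of contrast, the trivial baseline simply pools both centers'
edges and runs a standard connectivity algorithm; this is exactly what
both models forbid, since it ships a polynomial number of edges and
reveals each center's input to the other. A short Python illustration
(not the paper's algorithm):

    # Naive baseline: pool all edges and run union-find. This uses
    # polynomial communication and reveals both inputs -- exactly
    # what the limited-sharing and low-trust models rule out.
    class UnionFind:
        def __init__(self):
            self.parent = {}
        def find(self, v):
            self.parent.setdefault(v, v)
            while self.parent[v] != v:
                self.parent[v] = self.parent[self.parent[v]]
                v = self.parent[v]
            return v
        def union(self, u, v):
            self.parent[self.find(u)] = self.find(v)

    def st_connected(edges_1, edges_2, s, t):
        uf = UnionFind()
        for u, v in edges_1 + edges_2:
            uf.union(u, v)
        return uf.find(s) == uf.find(t)

    # s-a is known only to center 1, a-t only to center 2; the
    # union graph connects s to t.
    assert st_connected([("s", "a")], [("a", "t")], "s", "t")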
We then consider low-communication algorithms in this setting for the
planted clique problem. We conjectured a sociologically justified
structural property of social networks whose nodes are all human. Given
this property, we developed an O(log^2 n)-communication algorithm for two
centers that finds a planted clique with log n randomly selected nodes
and an adversarial edge distribution. However, validation experiments on
real networks failed because these networks contain many non-human nodes.
We describe a method for removing non-human nodes given only the
topology, along with some experimental validation of the cleaning. The
result is more purely human networks with a new constructive property
that may be useful for future algorithms.
This is joint work with Jon Berry (Sandia National Laboratories), Michael
Collins (Christopher Newport University), Aaron Kearns (University of New
Mexico), Jared Saia (University of New Mexico), and Randy Smith (Sandia
National Laboratories).
======================================================================