Organizers: Young-San Lin, Simina Brânzei, Petros Drineas, Elena Grigorescu
Wed Sept 5 Greg Frederickson
Purdue
Title: Hidden in Plane Sight: the Extraordinary Vision of Ernest Irving Freese Room 3102, 11:30am-12:00 Part of the CS COLLOQUIUM Abstract: A geometric dissection is a cutting of a geometric figure into pieces that you can rearrange to form another figure. Dissections have had a surprisingly rich history, reaching back to Islamic mathematicians a millennium ago and Greek mathematicians more than two millennia ago. Following the death of Los Angeles architect Ernest Irving Freese in 1957, his precious 200-page manuscript, chock-full of gorgeous geometric dissections, “disappeared” and wasn't recovered for more than four decades. Hidden within many of its dissections are remarkable uses of plane tessellations, nifty instantiations of mathematical identities, amazing forays into the structure of regular polygons, and spectacular hingings of the dissection pieces. Based on the speaker's recent book [1], this talk will illuminate a number of Freese's lovely insights and their outgrowths.
Reference [1] Frederickson, G.N., 2017. Ernest Irving Freese's "Geometric Transformations": the Man, the Manuscript, the Magnificent Dissections. World Scientific: New Jersey.
Mon Sept 10 Yuval Peres
Microsoft Research
Title: Communication cost of consensus for nodes with limited memory Room LWSN 1142 1:00pm - 2:00pm. CS COLLOQUIUM Abstract: Consensus protocols are useful in distributed systems with limited power and bandwidth (e.g., miniature sensors, robot swarms, blockchains). We consider a model of n nodes trying to reach consensus. Each node is initially assigned a bit, and we assume that the majority bit $b$ is more common than $1-b$ by some constant factor > 1. Nodes communicate asynchronously: when a node's Poisson clock rings, the node can choose to either exchange information with a randomly selected node or remain silent. The former action costs 1, and the latter costs 0. The goal is for all nodes to arrive at $b$ with high probability. Previous work has focused on minimizing the time to consensus; our aim is minimizing communication. We show that for reaching consensus at linear communication cost, $\Theta(\log \log \log n)$ bits of memory are necessary and sufficient. The best algorithms we know intrinsically select a set of "expert" nodes that learn the majority bit and then disseminate this knowledge. A key step in the proofs is to determine when nodes can become "aware" of having reached the majority bit and stop communicating; this is impossible if their memory is too small.
Joint work with Giulia Fanti, Nina Holden and Gireeja Ranade.
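For readers new to this model, here is a minimal Python sketch of asynchronous binary consensus in a gossip setting. It uses the classical 3-state approximate-majority dynamics rather than the memory-optimal protocol from the talk, and the parameters and stopping rule are illustrative assumptions only; it is meant to show how communication cost is counted.
```python
import random

def approximate_majority(bits, max_interactions=None):
    """Toy simulation of asynchronous binary consensus in a gossip model.

    Each node holds a state in {0, 1, None}, where None means "undecided".
    At each step a uniformly random node activates (a discrete-time stand-in
    for Poisson clocks) and exchanges states with another random node.  This
    is the classical 3-state approximate-majority dynamics, NOT the protocol
    from the talk; it only illustrates the model and how communication cost
    (number of exchanges) is counted.
    """
    n = len(bits)
    states = list(bits)                      # initial bits; majority bit assumed more common
    if max_interactions is None:
        max_interactions = 50 * n
    cost = 0
    for _ in range(max_interactions):
        if len(set(states)) == 1:            # all nodes agree: consensus reached
            break
        i, j = random.randrange(n), random.randrange(n)
        cost += 1                            # one exchange costs 1; staying silent costs 0
        si, sj = states[i], states[j]
        if si is None:
            states[i] = sj                   # undecided nodes adopt the other bit
        elif sj is not None and si != sj:
            states[i] = None                 # conflicting bits -> become undecided
        if sj is None:
            states[j] = si
        elif si is not None and si != sj:
            states[j] = None
    return states, cost

if __name__ == "__main__":
    n = 1000
    init = [1] * (6 * n // 10) + [0] * (4 * n // 10)   # majority bit is 1
    final, cost = approximate_majority(init)
    print("consensus value(s):", set(final), "| exchanges used:", cost)
```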
Bio: Yuval Peres is a Principal Researcher at Microsoft Research in Redmond. He obtained his Ph.D. at the Hebrew University, Jerusalem in 1990, and later taught there and at the University of California, Berkeley. He has published more than 300 research papers and has co-authored books on Brownian motion, Markov chains, Probability on Networks, Fractals, and Game Theory. Yuval was awarded the Rollo Davidson Prize in 1995, the Loève Prize in 2001, and the David P. Robbins Prize in 2011. He was an invited speaker at the 2002 International Congress of Math. and in the 2008 European Congress, and was a plenary lecturer at the Math. Congress of the Americas in 2017. He has advised 22 Ph.D. students. He is a fellow of the American Math. Society and the Institute of Mathematical Statistics. In 2016 he was elected to the U.S. National Academy of Sciences.
Wed Sept 12 Akash Kumar
Purdue
Title: Finding Minors in Sublinear Time in Bounded Degree Graphs with (Almost) Optimal One-Sided Query Complexity Room LWSN 3102 Abstract: Let $G$ be an undirected, bounded degree graph with $n$ vertices. Fix a finite graph $H$, and suppose one must remove $\varepsilon n$ edges from $G$ to make it $H$-minor free (for some small constant $\varepsilon > 0$). We give an $n^{1/2+o(1)}$-time randomized procedure that, with high probability, finds an $H$-minor in such a graph. For an example application, suppose one must remove $\varepsilon n$ edges from a bounded degree graph $G$ to make it planar. This result implies an algorithm, with the same running time, that produces a $K_{3,3}$ or $K_5$ minor in $G$. No sublinear time bound was known for this problem prior to this result. By the graph minor theorem, we get an analogous result for any minor-closed property. Up to $n^{o(1)}$ factors, this resolves a conjecture of Benjamini-Schramm-Shapira (STOC 2008) on the existence of one-sided property testers for minor-closed properties. Furthermore, our algorithm is nearly optimal, by an $\Omega(\sqrt{n})$ lower bound of Czumaj et al (RSA 2014). Prior to this work, the only graphs $H$ for which non-trivial property testers were known for $H$-minor freeness are the following: $H$ being a forest or a cycle (Czumaj et al, RSA 2014), $K_{2,k}$, the $(k \times 2)$-grid, and the $k$-circus (Fichtenberger et al, Arxiv 2017).
Joint work with C. Seshadhri and Andrew Stolman.
Wed Sept 19 Akash Kumar
Purdue
Title: Finding Minors in Sublinear Time in Bounded Degree Graphs with (Almost) Optimal One-Sided Query Complexity (contd.) Room LWSN Abstract: see above.
Joint work with C. Seshadhri and Andrew Stolman.
Wed Sept 26 Elena Grigorescu
Purdue
Title: Communication-Efficient Distributed Learning of Discrete Distributions Room TBD, 11:30-11:50 am Abstract: We initiate a systematic investigation of distribution learning when the data is distributed across multiple servers. The servers must communicate with a referee and the goal is to estimate the underlying distribution with as few bits of communication as possible. We focus on non-parametric density estimation of discrete distributions with respect to the $\ell_1$ and $\ell_2$ norms. We provide the first non-trivial upper and lower bounds on the communication complexity of this basic estimation task in various settings of interest.
Joint work with Ilias Diakonikolas, Jerry Li, Abhiram Natarajan, Krzysztof Onak, Ludwig Schmidt
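As a point of reference for the communication measure above, here is a hedged Python sketch of the naive baseline in which every server ships its full empirical histogram to the referee. The variable names and parameters are illustrative assumptions; the protocols from the talk are designed to use far fewer bits than this baseline.
```python
import numpy as np

def naive_distributed_estimate(samples_per_server, k):
    """Naive baseline: every server sends its full empirical histogram over
    {0, ..., k-1} to the referee, who merges them into one empirical estimate.

    This is NOT a protocol from the talk; it is the obvious high-communication
    baseline (each histogram costs on the order of k counts) against which
    communication-efficient protocols are measured.
    """
    counts = np.zeros(k)
    total = 0
    for samples in samples_per_server:
        counts += np.bincount(samples, minlength=k)
        total += len(samples)
    return counts / total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, servers, m = 20, 10, 500
    p = rng.dirichlet(np.ones(k))                       # unknown discrete distribution
    data = [rng.choice(k, size=m, p=p) for _ in range(servers)]
    p_hat = naive_distributed_estimate(data, k)
    print("l1 estimation error:", np.abs(p - p_hat).sum())
```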
AND
Simina Branzei
Purdue
Title: Online Learning with an Almost Perfect Expert Room LWSN TBD, 11:55-12:30 Abstract: We study the online learning problem where a forecaster makes a sequence of predictions using the advice of n experts. Our main contribution is to analyze the regime where the best expert makes at most b mistakes and to show that when $b = o(\log_4 n)$, the expected number of mistakes made by the optimal forecaster is at most $\log_4 n + o(\log_4 n)$. We also describe an adversary strategy showing that this bound is tight.
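For background on the experts setting, the sketch below implements the classical deterministic Weighted Majority forecaster (Littlestone-Warmuth), which makes O(b + log n) mistakes when the best expert makes at most b. It is not the optimal forecaster analyzed in the talk, and the toy data generation is purely an illustrative assumption.
```python
import random

def weighted_majority(expert_predictions, outcomes, beta=0.5):
    """Classic deterministic Weighted Majority forecaster (Littlestone-Warmuth).

    Shown only as background for the expert-advice setting above; it is NOT the
    optimal forecaster analyzed in the talk.  With n experts and a best expert
    making at most b mistakes, it makes O(b + log n) mistakes.
    """
    n = len(expert_predictions[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, outcome in zip(expert_predictions, outcomes):
        vote1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote1 >= vote0 else 0
        mistakes += (guess != outcome)
        # multiplicatively penalize every expert that was wrong this round
        weights = [w * (beta if p != outcome else 1.0) for w, p in zip(weights, preds)]
    return mistakes

if __name__ == "__main__":
    random.seed(1)
    n, T = 64, 200
    truth = [random.randint(0, 1) for _ in range(T)]
    flips = set(random.sample(range(T), 2))          # expert 0 errs in exactly 2 rounds
    rounds = [[1 - truth[t] if t in flips else truth[t]]
              + [random.randint(0, 1) for _ in range(n - 1)] for t in range(T)]
    print("forecaster mistakes:", weighted_majority(rounds, truth))
```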
Joint work with Yuval Peres.
Wed Oct 10 Alexander Block
Purdue
Title: Secure Computation using Leaky Correlations (Asymptotically Optimal Constructions) Room LWSN 3133B, 11:30 am Abstract: Most secure computation protocols can be effortlessly adapted to offload a significant fraction of their computationally and cryptographically expensive components to an offline phase so that the parties can run a fast online phase and perform their intended computation securely. During this offline phase, parties generate private shares of a sample generated from a particular joint distribution, referred to as the correlation. These shares, however, are susceptible to leakage attacks by adversarial parties, which can compromise the security of the entire secure computation protocol. The objective, therefore, is to preserve the security of the honest party despite the leakage performed by the adversary on her share. Prior solutions, starting with n-bit leaky shares, either used 4 messages or enabled the secure computation of only sub-linear size circuits. Our work presents the first 2-message secure computation protocol for 2-party functionalities that have $\Theta(n)$ circuit-size despite $\Theta(n)$ bits of leakage, a qualitatively optimal result. We compose a suitable 2-message secure computation protocol in parallel with our new 2-message correlation extractor. Correlation extractors, introduced by Ishai, Kushilevitz, Ostrovsky, and Sahai (FOCS 2009) as a natural generalization of privacy amplification and randomness extraction, recover "fresh" correlations from the leaky ones, which are subsequently used by other cryptographic protocols. We construct the first 2-message correlation extractor that produces $\Theta(n)$-bit fresh correlations even after $\Theta(n)$-bit leakage. Our principal technical contribution, which is of potential independent interest, is the construction of a family of multiplication-friendly linear secret sharing schemes that is simultaneously a family of small-bias distributions. We construct this family by randomly "twisting then permuting" appropriate Algebraic Geometry codes over constant-size fields.
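To make the notion of offline-phase correlations concrete, here is a small Python sketch of one standard example, a Beaver multiplication triple split into additive shares. This only illustrates what the leaky shares discussed above are shares of; it is a made-up toy and says nothing about the correlation extractors or leakage-resilience techniques in the talk.
```python
import secrets

def beaver_triple(n_bits=8):
    """Toy offline-phase "correlation": a Beaver multiplication triple, additively
    shared between two parties modulo 2**n_bits.

    Illustrative only: this shows the kind of correlated randomness generated in
    an offline phase, not the leakage-resilient constructions from the talk.
    """
    mod = 1 << n_bits

    def share(x):                       # split x into two random additive shares
        r = secrets.randbelow(mod)
        return r, (x - r) % mod

    a, b = secrets.randbelow(mod), secrets.randbelow(mod)
    c = (a * b) % mod
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(c)
    return (a0, b0, c0), (a1, b1, c1), mod

if __name__ == "__main__":
    p0, p1, mod = beaver_triple()
    a = (p0[0] + p1[0]) % mod
    b = (p0[1] + p1[1]) % mod
    c = (p0[2] + p1[2]) % mod
    assert c == (a * b) % mod           # the shares reconstruct a valid triple
    print("each party holds one additive share of (a, b, a*b)")
```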
Joint work with Hemanta Maji, Hai Nguyen, and Divya Gupta (to appear in TCC 2018).
Wed Jugal Garg
UIUC IE
Title: Fisher Markets and Nash Social Welfare Room LWSN Abstract: Fisher market equilibrium is a powerful solution concept that has found several surprising applications even in non-market settings which do not involve any exchange of money but only require the remarkable fairness and efficiency properties of equilibria, e.g., scheduling, auctions, mechanism design, fair division, etc. A very recent application is approximation algorithms for maximizing the Nash social welfare (NSW) when allocating a set of indivisible items to a set of agents. In this talk, I will start with the Fisher market model and its connection with the NSW problem. Then, I will show how to design constant-factor approximation algorithms for maximizing the NSW using this connection.
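As a minimal illustration of the NSW objective mentioned above (not of the talk's algorithms), the Python sketch below computes the Nash social welfare of an allocation of indivisible items and finds the optimum by brute force on a tiny instance; the instance and function names are assumptions made up for the example.
```python
import itertools
import math

def nash_social_welfare(valuations, allocation):
    """NSW of an allocation of indivisible items: the geometric mean of the
    agents' (additive) utilities.  allocation[j] = index of the agent who
    receives item j."""
    n = len(valuations)
    utils = [sum(valuations[i][j] for j in range(len(allocation)) if allocation[j] == i)
             for i in range(n)]
    return math.prod(utils) ** (1.0 / n)

def best_allocation(valuations, n_items):
    """Exponential brute force over all assignments; only meant for tiny
    instances.  The talk is about constant-factor approximation algorithms
    for this objective."""
    n = len(valuations)
    return max(itertools.product(range(n), repeat=n_items),
               key=lambda alloc: nash_social_welfare(valuations, alloc))

if __name__ == "__main__":
    V = [[3, 1, 2, 5], [4, 4, 1, 1]]          # 2 agents, 4 items (toy valuations)
    alloc = best_allocation(V, 4)
    print("item -> agent:", alloc, "| NSW:", nash_social_welfare(V, alloc))
```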
Joint work with
Wed Oct 31 Shai Vardi
Purdue Krannert
Title: On the probe complexity of Local Computation Algorithms Room LWSN TBD, 12:15-1pm Abstract: The Local Computation Algorithms (LCA) model is a computational model aimed at problem instances with huge inputs and outputs. For graph problems, the input graph is accessed using probes: strong probes (SPs) specify a vertex $v$ and receive as a reply a list of $v$'s neighbors, and weak probes (WPs) specify a vertex $v$ and a port number $i$ and receive as a reply $v$'s $i^{th}$ neighbor. Given a local query (e.g., ``is a certain vertex in the vertex cover of the input graph?''), an LCA should compute the corresponding local output (e.g., ``yes'' or ``no'') while making only a small number of probes, with the requirement that all local outputs form a single global solution (e.g., a legal vertex cover). We study the probe complexity of LCAs that are required to work on graphs that may have arbitrarily large degrees. In particular, such LCAs are expected to probe the graph a number of times that is significantly smaller than the maximum, average, or even minimum degree. In this talk, I will focus on some results for strong probes. Our negative results include showing that there are graphs for which finding a \emph{maximal} matching requires $\Omega(\sqrt{n})$ strong probes. On the positive side, we design an LCA that finds a $(1-\epsilon)$ approximation to \emph{maximum} matching using a constant number of probes in regular graphs.
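As background for the probe model (not the algorithms from the talk, whose point is handling unbounded degrees), here is a Python sketch of a strong-probe oracle together with the classical local simulation of the greedy maximal matching under a random edge ranking, in the spirit of Nguyen-Onak; the tiny graph and helper names are illustrative assumptions.
```python
import random

class StrongProbeOracle:
    """Adjacency-list graph wrapper that counts strong probes: probing a vertex
    returns its full list of neighbors (the SP model from the abstract)."""
    def __init__(self, adj):
        self.adj = adj
        self.probes = 0

    def probe(self, v):
        self.probes += 1
        return self.adj[v]

def in_matching(oracle, edge, rank):
    """Local test: is `edge` in the greedy maximal matching induced by the random
    edge ranking `rank`?  (Classical local simulation; background only, not the
    LCAs from the talk.)"""
    u, v = edge
    key = (min(u, v), max(u, v))
    neighbors = set(oracle.probe(u)) | set(oracle.probe(v))
    for w in neighbors - {u, v}:
        for f in ((u, w), (v, w)):
            fkey = (min(f), max(f))
            if fkey in rank and rank[fkey] < rank[key] and in_matching(oracle, f, rank):
                return False          # a lower-ranked adjacent edge is already matched
    return True

if __name__ == "__main__":
    random.seed(0)
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}            # tiny example graph
    rank = {e: random.random() for e in [(0, 1), (0, 2), (1, 2), (2, 3)]}
    oracle = StrongProbeOracle(adj)
    print("(2,3) in matching?", in_matching(oracle, (2, 3), rank),
          "| strong probes used:", oracle.probes)
```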
Joint work with Uri Feige and Boaz Patt-Shamir.
Thu Nov 16 Palma London
Caltech
Title: Frameworks for High Dimensional Parallel and Distributed Optimization Room LWSN TBD, 12-1pm Abstract: In this talk we present frameworks for solving high dimensional optimization problems, in which both the number of variables and the amount of data are huge. In these settings, practical applications require solvers to work at extreme scales, and despite decades of work, existing solvers do not scale as desired in many cases. We present black-box acceleration frameworks for speeding up convex solvers, which can be used in either parallel or distributed settings. Given a huge problem, we develop dimension reduction techniques that allow the problem to be solved in a fraction of the original time and make the computation more amenable to distributed or parallel computation. We present worst-case guarantees on both the quality of the solution and the speedup provided. In particular, we consider two settings of interest. First, we consider packing linear programs (LPs). LP solvers are fundamental to many learning and inference problems. We present a framework that can speed up linear programming solvers, such as CPLEX and Gurobi, by orders of magnitude, while maintaining nearly optimal solutions. The framework can be used for both linear programs and integer linear programs. Second, we consider convex problems with linear constraints, defined with respect to a graph, where the problem data is distributed across the nodes. We present a framework that uses far less communication than existing distributed techniques require and is robust to link failures in the graph. We show both theoretically and numerically that the approach uses orders of magnitude less communication than existing approaches, while maintaining nearly optimal solutions.
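The following hedged Python sketch only illustrates the flavor of black-box speedups for packing LPs: it solves a full packing LP with scipy and compares it against a randomly restricted copy with a quarter of the variables. The random restriction is a hypothetical stand-in, not the dimension-reduction technique of the talk.
```python
import numpy as np
from scipy.optimize import linprog

def solve_packing_lp(c, A, b):
    """max c.x  subject to  A x <= b, x >= 0  (scipy minimizes, hence the sign flip)."""
    res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
    return -res.fun

if __name__ == "__main__":
    # Toy illustration only: compare the full packing LP against a randomly
    # restricted copy that keeps a quarter of the variables.
    rng = np.random.default_rng(0)
    n, m = 400, 100
    A = rng.uniform(0.0, 1.0, size=(m, n))            # nonnegative constraint matrix
    b = np.full(m, n / 10)                            # capacities
    c = rng.uniform(0.0, 1.0, size=n)                 # objective coefficients

    full_value = solve_packing_lp(c, A, b)
    keep = rng.choice(n, size=n // 4, replace=False)  # random subset of columns
    reduced_value = solve_packing_lp(c[keep], A[:, keep], b)
    print(f"full LP value: {full_value:.1f} | reduced LP value: {reduced_value:.1f}")
```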
Joint work with
Fri Nov 30 Kent Quanrud
UIUC
Title: Submodular function maximization in parallel via the multilinear relaxation 12-1pm, Room LWSN Abstract: Balkanski and Singer recently initiated the study of adaptivity (or parallelism) for constrained submodular function maximization, and studied the setting of a cardinality constraint. Subsequent improvements for this problem by Balkanski, Rubinstein, and Singer and by Ene and Nguyen resulted in a near-optimal $(1-1/e-\epsilon)$-approximation in $O(\log n/\epsilon^2)$ rounds of adaptivity. Partly motivated by the goal of extending these results to more general constraints, we describe parallel algorithms for approximately maximizing the multilinear relaxation of a monotone submodular function subject to multiple packing constraints. We obtain a near-optimal $(1-1/e-\epsilon)$-approximation in polylogarithmic rounds of adaptivity. Our algorithm takes a continuous viewpoint and combines several ideas ranging from the continuous greedy algorithm, its adaptation to the MWU framework for packing constraints, and parallel algorithms for packing LPs. Time permitting, we will discuss some recent and ongoing work on this topic.
Joint work with Chandra Chekuri.
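For contrast with the low-adaptivity algorithms above, here is a short Python sketch of the standard sequential greedy for monotone submodular maximization under a cardinality constraint, instantiated as maximum coverage. It attains the same (1 - 1/e) guarantee but needs k fully adaptive rounds; the example instance is made up for illustration.
```python
def greedy_max_coverage(sets, k):
    """Standard sequential greedy for monotone submodular maximization under a
    cardinality constraint, instantiated as maximum coverage.  It achieves a
    (1 - 1/e) approximation but needs k fully adaptive rounds, since each pick
    depends on all previous ones.  Background only, not the parallel algorithms
    from the talk."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

if __name__ == "__main__":
    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}, {2, 5}]      # toy instance
    chosen, covered = greedy_max_coverage(sets, k=2)
    print("picked sets:", chosen, "| elements covered:", sorted(covered))
```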
Links to schedules for Spring 2018, Fall 2017, Spring 2017, Fall 2016, Spring 2016, Fall 2015, Spring 2015, Fall 2014, Spring 2014, Fall 2013, Spring 2013, Fall 2012.