Theoretical CS Reading Group
The reading group meets once a week, usually on Thursdays from 4 pm to 5 pm, to discuss a particular paper or topic.
Please join our mailing list to get notifications about the weekly meetings and discussions. You might also be interested in the Purdue Theory Seminar Fall 2021. Upcoming talks will be listed on this webpage regularly. If you wish to give a talk or present a paper, please send an email to tmukherj at purdue.edu.
Past semesters: Spring 2021 | Fall 2020 | Spring 2019 | Fall 2018 | Spring 2018 | Fall 2017 | Spring 2017 | Fall 2016 | Spring & Summer 2016 | Fall 2015 | Spring & Summer 2015 | Fall 2014 | Spring 2014 | Summer & Fall 2013
Fall 2021
Thursday, 9th September 2021 —
``Almost Envy-Freeness with General Valuations'', Benjamin Plaut and Tim Roughgarden
Presenter: Naomi Popkin
The goal of fair division is to distribute resources among competing players in a "fair" way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
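To make the EFX condition concrete (the symbols below are our notation, not the abstract's): writing A_i for player i's bundle and v_i for player i's valuation, an allocation is EFX if

    v_i(A_i) >= v_i(A_j \ {g})   for all players i, j and every good g in A_j,

whereas exact envy-freeness would require v_i(A_i) >= v_i(A_j) without removing any good.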
Thursday, 16th September 2021 —
``Allocation with Weak Priorities and General Constraints'', Joint work with Hai Nguyen, Thanh Nguyen, and Kemal Altinkemer
Presenter: Young-San Lin
We consider a resource allocation problem that combines three general features: complex resource constraints, weak priority rankings over the agents, and ordinal preferences over bundles of resources. We develop a mechanism based on a new concept called "Competitive Stable Equilibrium". It has several attractive properties, unifies two different frameworks of one-sided and two-sided markets, and extends existing methods to richer environments. Our framework also allows for an alternative and more flexible tie-breaking rule by giving agents different budgets. We empirically apply our mechanism to reassign season tickets to families in the presence of social distancing. Our simulation results show that our method outperforms existing ones in both efficiency and fairness measures.
Thursday, 23rd September 2021 —
``Time- and Space-Efficient Arguments from Groups of Unknown Order'', Joint work with Justin Holmgren, Alon Rosen, Ron D. Rothblum, Pratik Soni
Presenter: Alex Block
We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every time T and space S non-deterministic RAM computation, the prover runs in time T⋅polylog(T) and space S⋅polylog(T), and the verifier runs in time n⋅polylog(T), where n is the input length. Our protocol relies on hidden order groups, which can be instantiated with a trusted setup from the hardness of factoring (products of safe primes), or without a trusted setup using class groups. The argument-system can heuristically be made non-interactive using the Fiat-Shamir transform. Our proof builds on DARK (Bünz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we: 1. Identify a significant gap in the proof of security of DARK. 2. Give a non-trivial modification of the DARK scheme that overcomes the aforementioned gap. The modified version also relies on significantly weaker cryptographic assumptions than those in the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way. 3. Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption). In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest.
Thursday, 30th September 2021 —
``A Theory of Universal Learning'', Joint work with Olivier Bousquet, Shay Moran, Ramon van Handel, and Amir Yehudayoff.
Presenter: Dr. Steve Hanneke
How quickly can a given class of concepts be learned from examples? It is common to measure the performance of a supervised machine learning algorithm by plotting its "learning curve", that is, the decay of the error rate as a function of the number of training examples. However, the classical theoretical framework for understanding learnability, the PAC model of Vapnik-Chervonenkis and Valiant, does not explain the behavior of learning curves: the distribution-free PAC model of learning can only bound the upper envelope of the learning curves over all possible data distributions. This does not match the practice of machine learning, where the data source is typically fixed in any given scenario, while the learner may choose the number of training examples on the basis of factors such as computational resources and desired accuracy. In this work, we study an alternative learning model that better captures such practical aspects of machine learning, but still gives rise to a complete theory of the learnable in the spirit of the PAC model. More precisely, we consider the problem of universal learning, which aims to understand the performance of learning algorithms on every data distribution, but without requiring uniformity over the distribution. The main result of this work is a remarkable trichotomy: there are only three possible rates of universal learning. More precisely, we show that the learning curves of any given concept class decay at either an exponential, linear, or arbitrarily slow rate. Moreover, each of these cases is completely characterized by appropriate combinatorial parameters, and we exhibit optimal learning algorithms that achieve the best possible rate in each case.
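For readers unfamiliar with the terminology, the three rates can be read roughly as follows (in our notation, not the abstract's), in terms of the best achievable expected error e(n) after n i.i.d. examples from a fixed but arbitrary data distribution: an exponential rate means e(n) <= C e^{-cn} for distribution-dependent constants, a linear rate means e(n) <= C/n, and an "arbitrarily slow" rate means that for every sequence decreasing to zero there is a distribution whose learning curve decays more slowly.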
Thursday, 7th October 2021 —
``Towards a theory of non-commutative optimization'', Peter Bürgisser, Cole Franks, Ankit Garg, Rafael Oliveira, Michael Walter, Avi Wigderson
Presenter: Haoyu Song
Recently, there have been efforts to develop algorithms for hard algebraic problems using tools from optimization; this paper summarizes the resulting body of work. I plan to talk about the following topics: (1) algorithmic invariant theory and why it is important; (2) basics of group representation theory and Lie groups; (3) solving optimization problems that are not convex but geodesically convex, and how traditional optimization techniques can be adapted; (4) the relationship between algorithmic invariant theory and geodesically convex optimization.
Thursday, 14th October 2021 —
``Exponential Lower Bounds for Locally Decodable Codes Correcting Insertions and Deletions'', joint work with Jeremiah Blocki, Kuan Cheng, Elena Grigorescu, Xin Li, Yu Zheng.
Presenter: Minshen Zhu
Locally Decodable Codes (LDCs) are error-correcting codes for which individual message symbols can be quickly recovered despite errors in the codeword. LDCs for Hamming errors have been studied extensively in the past few decades, where a major goal is to understand the amount of redundancy that is necessary and sufficient to decode from large amounts of error, with small query complexity. Despite exciting progress, we still don't have satisfactory answers in several important parameter regimes. For example, in the case of 3-query LDCs, the gap between existing constructions and lower bounds is superpolynomial in the message length. In this work we study LDCs for insertion and deletion errors, called Insdel LDCs. Their study was initiated by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), who gave a reduction from Hamming LDCs to Insdel LDCs with a small blowup in the code parameters. On the other hand, the only known lower bounds for Insdel LDCs come from those for Hamming LDCs, thus there is no separation between them. Here we prove new, strong lower bounds for the existence of Insdel LDCs. In particular, we show that 2-query linear Insdel LDCs do not exist, and give an exponential lower bound for the length of all q-query Insdel LDCs with constant q. For q >= 3 our bounds are exponential in the existing lower bounds for Hamming LDCs. Furthermore, our exponential lower bounds continue to hold for adaptive decoders, and even in private-key settings where the encoder and decoder share secret randomness. This exhibits a strict separation between Hamming LDCs and Insdel LDCs. Our techniques are based on a delicate design and analysis of hard distributions of insertion and deletion errors, which depart significantly from typical techniques used in analyzing Hamming LDCs.
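As a reference point (this is the standard definition stated in our notation, not text quoted from the abstract): a code C: {0,1}^k -> Σ^n is a (q, δ, ε) locally decodable code if there is a randomized decoder that, given an index i and query access to any word y within distance δn of some codeword C(x), makes at most q queries to y and outputs x_i with probability at least 1/2 + ε. Hamming LDCs measure distance by symbol substitutions, while Insdel LDCs measure it by edit distance (insertions and deletions), which additionally makes the positions of codeword symbols unreliable.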
Thursday, 21st October 2021 —
``Asymptotic optimality and minimal complexity of classification by random projection'', joint work with Mireille Boutin
Presenter: Evzenie Coupkova
The generalization error of a classifier is related to the complexity of the set of functions among which the classifier is chosen. Roughly speaking, the more complex the family, the greater the potential disparity between the training error and the population error of the classifier. In layman's terms, this is the well-known Occam's razor principle, which suggests favoring low-complexity classifiers over complex ones. We study a family of low-complexity classifiers consisting of thresholding the one-dimensional feature obtained by projecting the data after embedding it into a higher dimensional space parametrized by monomials of order up to k. More specifically, the extended data is projected n times and the best classifier among those n (based on its performance on training data) is chosen. We obtain a bound on the generalization error of these low-complexity classifiers. The bound is less than that of any classifier with a non-trivial VC dimension. We also show that the error of the classifiers converges to the optimal (Bayes) error as k and n go to infinity.
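A minimal sketch of the kind of classifier family described above, under our own assumptions (a monomial embedding of degree up to k, Gaussian random projection directions, and a training-error-minimizing threshold); this illustrates the general recipe, not the authors' exact construction or analysis:

    import numpy as np
    from itertools import combinations_with_replacement

    def monomial_features(X, k):
        # Embed each row of X using all monomials of degree 1..k (our choice of embedding).
        n, d = X.shape
        feats = []
        for deg in range(1, k + 1):
            for idx in combinations_with_replacement(range(d), deg):
                feats.append(np.prod(X[:, idx], axis=1))
        return np.stack(feats, axis=1)

    def best_random_projection_classifier(X, y, k=2, n_proj=100, seed=0):
        # Project the embedded data n_proj times onto random one-dimensional directions
        # and keep the (direction, threshold, sign) with the smallest training error.
        rng = np.random.default_rng(seed)
        Z = monomial_features(X, k)
        best = None
        for _ in range(n_proj):
            w = rng.standard_normal(Z.shape[1])
            s = Z @ w
            order = np.sort(s)
            for t in (order[:-1] + order[1:]) / 2:  # candidate thresholds between sorted projections
                for sign in (1, -1):
                    pred = (sign * (s - t) > 0).astype(int)
                    err = np.mean(pred != y)
                    if best is None or err < best[0]:
                        best = (err, w, t, sign)
        return best  # (training error, direction, threshold, sign)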
Thursday, 28th October 2021 —
``An Equivalence between private learning and online learning'', Mark Bun, Roi Livni, Shay Moran, Noga Alon, and Maryanthe Malliaris
Presenter: Tamalika Mukherjee
This talk will discuss two recent papers ([ALMM19], [BLM20]) that show a deep connection between two important areas in learning theory: online learning and differentially private learning.
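As we understand the results being surveyed, the connection is an equivalence: a concept class is learnable by an (approximately) differentially private algorithm if and only if it is online learnable, i.e., if and only if its Littlestone dimension is finite, with [ALMM19] proving that private learnability implies finite Littlestone dimension and [BLM20] establishing the converse.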
Thursday, 4th November 2021 —
"Approaching utopia: strong truthfulness and externality-resistant mechanisms", Amos Fiat, Anna Karlin, Elias Koutsoupias, Angelina Vidali
Presenter: Rohan Garg
We introduce and study strongly truthful mechanisms and their applications. We use strongly truthful mechanisms as a tool for implementation in undominated strategies for several problems, including the design of externality-resistant auctions and a variant of multi-dimensional scheduling.
Thursday, 11th November 2021 —
"PROPm Allocations of Indivisible Goods to Multiple Agents", Artem Baklanov, Pranav Garimidi, Vasilis Gkatzelis, Daniel Schoepflin
Presenter: Raymond Song
We study the classic problem of fairly allocating a set of indivisible goods among a group of agents, and focus on the notion of approximate proportionality known as PROPm. Prior work showed that there exists an allocation that satisfies this notion of fairness for instances involving up to five agents, but fell short of proving that this is true in general. We extend this result to show that a PROPm allocation is guaranteed to exist for all instances, independent of the number of agents or goods. Our proof is constructive, providing an algorithm that computes such an allocation and, unlike prior work, the running time of this algorithm is polynomial in both the number of agents and the number of goods.
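For orientation (our paraphrase of the fairness notion, not text from the abstract): exact proportionality would require each of the n agents to receive a bundle worth at least a 1/n fraction of her value for all the goods; PROPm, as we recall its definition, relaxes this by crediting agent i with her "maximin good", i.e., it requires v_i(A_i) + max_{j != i} min_{g in A_j} v_i(g) >= v_i(M)/n for every agent i.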
Thursday, 18th November 2021 —
``Approximating Nash Social Welfare under Binary XOS and Binary Subadditive Valuations'', joint work with Siddharth Barman
Presenter: Paritosh Verma
We study the problem of allocating indivisible goods among agents in a fair and economically efficient manner. In this context, the Nash social welfare--defined as the geometric mean of agents' valuations for their assigned bundles--stands as a fundamental measure that quantifies the extent of fairness of an allocation. Focusing on instances in which the agents' valuations have binary marginals, we develop essentially tight results for (approximately) maximizing Nash social welfare under two of the most general classes of complement-free valuations, i.e., under binary XOS and binary subadditive valuations. For binary XOS valuations, we develop a polynomial-time algorithm that finds a constant-factor (specifically 288) approximation for the optimal Nash social welfare, in the standard value-oracle model. The allocations computed by our algorithm also achieve constant-factor approximation for social welfare and the groupwise maximin share guarantee. These results imply that--in the case of binary XOS valuations--there necessarily exists an allocation that simultaneously satisfies multiple (approximate) fairness and efficiency criteria. We complement the algorithmic result by proving that Nash social welfare maximization is APX-hard under binary XOS valuations. Furthermore, this work establishes an interesting separation between the binary XOS and binary subadditive settings. In particular, we prove that an exponential number of value queries are necessarily required to obtain even a sub-linear approximation for Nash social welfare under binary subadditive valuations.
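In symbols (our notation): for an allocation A = (A_1, ..., A_n) among n agents, the Nash social welfare is NSW(A) = (prod_i v_i(A_i))^(1/n). Maximizing this geometric mean balances efficiency against fairness, since a single agent left with near-zero value drives the whole objective toward zero.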
Thursday, 2nd December 2021 —
"Approximating 1-Wasserstein Distance between Persistence Diagrams by
Graph Sparsification", joint work with Tamal Dey
Presenter: Simon Zhang
Persistence diagrams (PDs) play a central role in topological data analysis. This analysis requires computing distances among such diagrams, such as the 1-Wasserstein distance. Accurate computation of these PD distances for large data sets that render large diagrams may not scale appropriately with the existing methods. The main source of difficulty ensues from the size of the bipartite graph on which a matching needs to be computed for determining these PD distances. We address this problem by making several algorithmic and computational observations in order to obtain an approximation. First, taking advantage of the proximity of PD points, we condense them, thereby decreasing the number of nodes in the graph for computation. The increase in point multiplicities is addressed by reducing the matching problem to a min-cost flow problem on a transshipment network. Second, we use Well-Separated Pair Decomposition to sparsify the graph to a size that is linear in the number of nodes. Both node and arc sparsifications contribute to the approximation factor, where we leverage a lower bound given by the Relaxed Word Mover's distance. Third, we eliminate bottlenecks during the sparsification procedure by introducing parallelism. Fourth, we develop an open source software called PDoptFlow based on our algorithm, exploiting parallelism by GPU and multicore. We perform extensive experiments and show that the actual empirical error is very low. We also show that we can achieve high performance at low guaranteed relative errors, improving upon the state of the art.
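As a point of reference for what the paper accelerates (this is a naive exact baseline in our own code, not the paper's sparsification pipeline and not PDoptFlow's API): the 1-Wasserstein distance between two small persistence diagrams can be computed as an assignment problem in which every point may alternatively be matched to its projection onto the diagonal; a minimal sketch, assuming the Euclidean ground metric:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def wasserstein1_pd(D1, D2):
        # Naive exact 1-Wasserstein distance between persistence diagrams D1 (n x 2)
        # and D2 (m x 2), each point given as (birth, death); Euclidean ground metric.
        D1 = np.atleast_2d(np.asarray(D1, float))
        D2 = np.atleast_2d(np.asarray(D2, float))
        n, m = len(D1), len(D2)
        # cost of sending a point to its orthogonal projection onto the diagonal y = x
        to_diag1 = np.abs(D1[:, 1] - D1[:, 0]) / np.sqrt(2)
        to_diag2 = np.abs(D2[:, 1] - D2[:, 0]) / np.sqrt(2)
        C = np.zeros((n + m, n + m))
        C[:n, :m] = np.linalg.norm(D1[:, None, :] - D2[None, :, :], axis=2)  # point-to-point
        C[:n, m:] = to_diag1[:, None]  # D1 point matched to the diagonal
        C[n:, :m] = to_diag2[None, :]  # D2 point matched to the diagonal
        # remaining (diagonal-to-diagonal) block stays zero
        rows, cols = linear_sum_assignment(C)
        return C[rows, cols].sum()

    # e.g. wasserstein1_pd([[0.0, 1.0], [0.2, 0.5]], [[0.1, 0.9]])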
Thursday, 9th December 2021 —
``Prophet inequality for bipartite matching: Merits of being simple and non adaptive'', joint work with Nikolai Gravin
Presenter: Hongao Wang
We consider the Bayesian online selection problem for matchings in bipartite graphs, i.e., the online weighted matching problem with edge arrivals in which the online algorithm knows the distributions of the edge weights; this corresponds to the intersection of two matroids in the model of Kleinberg and Weinberg [35]. We consider a simple class of non-adaptive vertex-additive policies that assign static prices to all vertices in the graph and accept each edge only if its weight exceeds the sum of the prices of the edge's endpoints. We show the existence of a vertex-additive policy with expected payoff at least one third of the prophet's payoff, and present a gradient-descent-type algorithm that quickly converges to the desired vector of vertex prices. This improves the adaptive online policy of Kleinberg and Weinberg for the intersection of two matroids in two ways: our policy is non-adaptive and has a better approximation guarantee of 3, instead of the previous guarantees of 5.82 by Kleinberg and Weinberg and 2e ≈ 5.44 by Feldman et al. [23], against the prophet. We give a complementary lower bound of 2.25 for any online algorithm in the bipartite matching setting.
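To make the policy class concrete (a sketch with our own variable names, with no claim about how the prices themselves are computed): once static vertex prices are fixed, the online rule is simply to accept an arriving edge when both endpoints are still free and its weight exceeds the sum of its endpoints' prices.

    def vertex_additive_policy(prices, edge_stream):
        # prices: dict mapping each vertex to its static price (assumed precomputed offline).
        # edge_stream: iterable of (u, v, weight) triples in arrival order.
        matched = set()
        matching, total = [], 0.0
        for u, v, w in edge_stream:
            if u in matched or v in matched:
                continue  # an endpoint is already matched, so the edge must be rejected
            if w > prices[u] + prices[v]:
                matching.append((u, v, w))
                matched.update((u, v))
                total += w
        return matching, total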