1st GSSI Summer Meeting on Algorithms

  • Date July 9, 2016
  • Hour All day
  • Room GSSI Main Lecture Hall
  • Speakers Multiple speakers
  • Area Computer Science


The Meeting will offer a view of current research in the field of Algorithms, with presentations from leading researchers. The meeting is organized so as to allow interaction and discussion, with a 1-hour slot for each presentation, including 15 minutes for discussion and comments.

Registration is free and open to all interested in Algorithms. To register, please send an email with the subject "SMA 2016 Registration" to [email address], containing the First Name, Last Name and Institution of the participant(s).


Talks:
  • Pierre Fraigniaud (University Paris-Diderot)
    "Distributed Testing of Excluded Subgraphs"
  • Seffi Naor (Technion)
    "Multi-label classification with pairwise relations"
  • Yuval Rabani (The Hebrew University of Jerusalem)
    "Market dynamics of best-response with lookahead"
  • Paul Spirakis (University of Liverpool)
    "The Complexity of Greedy Matchings"
  • Moti Yung (Columbia University)
    "Crossing the Theory-Practice Chasm: On Deploying Secure Computations Commercially"


Organizers:
Michele Flammini (University of L'Aquila and GSSI)
Giuseppe Persiano (University of Salerno)


Local organization:
Gianlorenzo D'Angelo (GSSI)
Mattia D'Emidio (GSSI)


Program:
9:15-9:30 Welcome and opening
9:30-10:30 Paul Spirakis - The Complexity of Greedy Matchings
10:30-10:45 Break
10:45-11:45 Moti Yung - Crossing the Theory-Practice Chasm: On Deploying Secure Computations Commercially
11:45-12:00 Coffee break
12:30-13:30 Yuval Rabani - Market dynamics of best-response with lookahead
13:30-14:30 Lunch break
14:30-15:30 Pierre Fraigniaud - Distributed Testing of Excluded Subgraphs
15:30-15:45 Coffee break
15:45-16:45 Seffi Naor - Multi-label classification with pairwise relations


Abstracts:

Speaker: Pierre Fraigniaud (University Paris-Diderot)
Title: Distributed Testing of Excluded Subgraphs 
Abstract: We study property testing in the context of distributed computing, under the classical CONGEST model. It is known that testing whether a graph is triangle-free can be done in a constant number of rounds, where the constant depends on how far the input graph is from being triangle-free. We show that, for every connected 4-node graph H, testing whether a graph is H-free can be done in a constant number of rounds too. The constant also depends on how far the input graph is from being H-free, and the dependence is identical to the one in the case of testing triangles. Hence, in particular, testing whether a graph is K_4-free, and testing whether a graph is C_4-free, can be done in a constant number of rounds (where K_k denotes the k-node clique, and C_k denotes the k-node cycle). On the other hand, we show that testing K_k-freeness and C_k-freeness for k ≥ 5 appears to be much harder. Specifically, we investigate two natural types of generic algorithms for testing H-freeness, called the DFS tester and the BFS tester. The latter captures the previously known algorithm for testing the presence of triangles, while the former captures our generic algorithm for testing the presence of a 4-node graph pattern H. We prove that both the DFS and BFS testers fail to test K_k-freeness and C_k-freeness in a constant number of rounds for k ≥ 5.

Joint work with Ivan Rapaport, Ville Salo, and Ioan Todinca.
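As a point of reference for the property being tested, triangle-freeness can be checked sequentially as sketched below. This is a hedged toy illustration, not the distributed CONGEST-model tester from the talk, where nodes only exchange short messages with neighbors in synchronous rounds; a property tester, moreover, only needs to distinguish triangle-free graphs from graphs that are far from triangle-free.

```python
# Hedged sketch: a sequential triangle-freeness check, NOT the distributed
# CONGEST tester of the talk. A graph contains a triangle iff some edge
# (u, v) has a common neighbor w.

def is_triangle_free(adj):
    """adj: dict mapping each node to the set of its neighbors (undirected)."""
    for u in adj:
        for v in adj[u]:
            if u < v and adj[u] & adj[v]:  # a shared neighbor closes a triangle u-v-w
                return False
    return True

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # the 4-cycle C_4: triangle-free
k3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}             # the triangle K_3
print(is_triangle_free(c4), is_triangle_free(k3))   # True False
```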

Speaker: Seffi Naor 
Title: Multi-label classification with pairwise relations 
Abstract: Motivated by applications in multi-label learning, we introduce the metric multi-labeling problem. The objective here is to classify objects by labels while optimizing a linear cost function of both assignment costs and separation costs, which are deduced from pairwise relations between objects. Each object can be classified by multiple labels, which may have either positive or negative costs, thus departing from previous works, e.g., metric labeling. The metric multi-labeling problem is NP-hard, and we tackle it by formulating an integer program capturing the deviation from a benchmark representing an "ideal" labeling. This approach, reminiscent of the notion of regret in online learning, allows us to cope with the possible negativity of the labeling costs. We develop a tight 2-approximation algorithm for metric multi-labeling by using a counterintuitive approach that distorts the optimal likelihood values computed by our linear programming relaxation.

Joint work with Shahar Chen, Dotan Di Castro, Zohar Karnin, Liane Lewin-Eytan, and Roy Schwartz. 
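The objective described above can be made concrete on a tiny instance. The sketch below (with hypothetical costs, solved by brute force; it is not the paper's LP-based 2-approximation) shows per-label assignment costs that may be negative, plus a separation cost charged, for each related pair of objects, per label on which the pair disagrees.

```python
# Hedged toy sketch of the metric multi-labeling OBJECTIVE only: two objects
# x, y; each may take any SET of labels; costs below are illustrative.
from itertools import chain, combinations

LABELS = ["red", "blue"]
ASSIGN = {"x": {"red": -1, "blue": 2},   # hypothetical assignment costs
          "y": {"red": 3, "blue": -2}}   # (note: negative costs are allowed)
PAIR_WEIGHT = 1                          # separation weight for the pair (x, y)

def subsets(items):
    """All subsets of a label set."""
    return list(chain.from_iterable(combinations(items, r)
                                    for r in range(len(items) + 1)))

def cost(sx, sy):
    assignment = sum(ASSIGN["x"][l] for l in sx) + sum(ASSIGN["y"][l] for l in sy)
    separation = PAIR_WEIGHT * len(set(sx) ^ set(sy))  # labels x and y disagree on
    return assignment + separation

best_cost = min(cost(sx, sy) for sx in subsets(LABELS) for sy in subsets(LABELS))
print(best_cost)  # -1, attained e.g. by x -> {red}, y -> {blue}
```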

Speaker: Yuval Rabani
Title: Market dynamics of best-response with lookahead 
Abstract: Most models of the dynamics of economic exchange fall into one of two frameworks. In one approach, the models assume full rationality of the participating agents, who unravel the entire evolution of the exchange and choose in advance all possible reactions optimally. This is unrealistically prescient, contradicts experience, hinders on-the-fly adaptation to unexpected changes, and does not illuminate out-of-equilibrium behavior. An alternative approach examines no-regret reactions that indeed model reactions to unexpected scenarios out-of-equilibrium, but those reactions have to be damped carefully for a desirable outcome to materialize, and they lack strategic justification. 
We present a "control theoretic" approach to the dynamics of economic exchange, based on limited lookahead situational analysis of the participating agents. It is motivated by and generalizes the level k model in which a level 0 player adopts a very simple response to current conditions, a level 1 player best-responds to a model in which others take level 0 actions, and so forth. (This is analogous to k-ply exploration of game trees in AI, and to receding-horizon control in control theory.) If the participants have deterministic mental models with this kind of finite-level response, there is obviously no way their mental models can all be consistent. Nevertheless, there is experimental evidence that people act this way in many situations, motivating the question of what the dynamics of such interactions lead to. 
We address this question in the setting of Fisher Markets with constant elasticities of substitution (CES) utilities, in the weak gross substitutes (WGS) regime. We show that despite the inconsistency of the mental models, and even if players’ models change arbitrarily from round to round, the market converges to its unique equilibrium. (We show this for both synchronous and asynchronous discrete-time updates.) Moreover, the result is computationally feasible in the sense that the convergence rate is linear, i.e., the distance to equilibrium decays exponentially fast. To the best of our knowledge, this is the first result that demonstrates, in Fisher markets, convergence at any rate for dynamics driven by a plausible model of seller incentives. Even for the simple case of (level 0) best-response dynamics, where we observe that convergence at some rate can be derived from recent results in convex optimization, our result is the first to demonstrate a linear rate of convergence. 

Joint work with Krishnamurthy Dvijotham and Leonard J. Schulman. 
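Convergence of price dynamics in the WGS regime can be illustrated in a far simpler setting than the talk's. The sketch below runs classical tatonnement (not the level-k lookahead dynamics of the paper) in a toy Cobb-Douglas Fisher market, a WGS special case, with unit supply of each good; the numbers are hypothetical.

```python
# Hedged toy sketch: tatonnement in a Cobb-Douglas Fisher market with unit
# supply per good. Buyer i spends the fraction shares[i][j] of its budget
# on good j, so demand for good j at prices p is (sum_i b_i * s_ij) / p_j.
budgets = [2.0, 1.0]
shares = [[0.5, 0.5], [0.2, 0.8]]

def demand(prices):
    return [sum(b * s[j] for b, s in zip(budgets, shares)) / prices[j]
            for j in range(len(prices))]

prices = [1.0, 1.0]
for _ in range(200):
    d = demand(prices)
    # move each price toward market clearing (demand = supply = 1)
    prices = [p + 0.1 * (dj - 1.0) for p, dj in zip(prices, d)]

print([round(p, 3) for p in prices])  # converges to the equilibrium [1.2, 1.8]
```

At equilibrium the price of each good equals the total money spent on it (1.2 and 1.8 here), and the iterates approach it at a linear rate, echoing in miniature the exponential decay of the distance to equilibrium stated in the abstract.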

Speaker: Paul Spirakis (University of Liverpool) 
Title: The Complexity of Greedy Matchings 
Abstract: Motivated by the fact that in several cases a matching in a graph is stable if and only if it is produced by a greedy algorithm, we study the problem of computing a maximum weight greedy matching on weighted graphs, termed GreedyMatching. In wide contrast to the maximum weight matching problem, for which many efficient algorithms are known, we prove that GreedyMatching is strongly NP-hard and APX-complete, and thus it does not admit a PTAS unless P=NP, even on graphs with maximum degree at most 3 and with at most three different integer edge weights. Furthermore, we prove that GreedyMatching remains strongly NP-hard if the input graph is in addition bipartite. Moreover, we consider two natural parameters of the problem, for which we establish a sharp threshold behavior between NP-hardness and tractability. On the positive side, we present a randomized approximation algorithm (Rgma) for GreedyMatching on a special class of weighted graphs, called bush graphs. We highlight an unexpected connection between Rgma and the approximation of maximum cardinality matching in unweighted graphs via randomized greedy algorithms. We show that, if the approximation ratio of Rgma is rho, then for every epsilon > 0 the randomized Mrg algorithm of Aronson et al. (RSA 1995) gives a (rho - epsilon)-approximation for the maximum cardinality matching. We conjecture that the tight bound for rho is 2/3; we prove our conjecture true for two subclasses of bush graphs. Proving a tight bound for the approximation ratio of Mrg on unweighted graphs (and thus also proving a tight value for rho) is a long-standing open problem (Poloczek and Szegedy, FOCS 2012). This unexpected relation of our Rgma algorithm with the Mrg algorithm may provide new insights for solving this problem.

Joint work with A. Deligkas and G. Mertzios.
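For a concrete picture of the greedy rule involved, here is a hedged sketch of the heaviest-edge-first greedy matching heuristic. The GreedyMatching problem of the talk asks for the best matching obtainable by such a greedy process (with its tie-breaking choices), which is what turns out to be hard; the example graph below is illustrative only.

```python
# Hedged sketch: the generic greedy matching rule. Repeatedly take a currently
# heaviest edge whose endpoints are both still unmatched.

def greedy_matching(edges):
    """edges: list of (u, v, weight) tuples. Returns the greedy matching
    obtained by scanning edges in non-increasing weight order."""
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# On the path a-b-c-d with weights 2, 3, 2 greedy takes only (b, c), total
# weight 3, while the maximum weight matching {(a,b), (c,d)} has weight 4.
edges = [("a", "b", 2), ("b", "c", 3), ("c", "d", 2)]
print(greedy_matching(edges))  # [('b', 'c', 3)]
```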

Speaker: Moti Yung (Columbia University)
Title: Crossing the Theory-Practice Chasm: On Deploying Secure Computations Commercially
Abstract: In 1978 the first Cryptographic Secure Protocols (Mental Poker) were conceived, as the third generation of modern cryptography (following symmetric key technology in the early 70's, and public key technology and key exchange in 1976).
Technological innovations in security and privacy are critical to advancing modern computing in our time. I will present an effort involving the deployment of commercial applications designed and built as a 'secure multi-party computation protocol for specific tasks,' to be used repetitively to achieve a number of concrete ubiquitous business goals. In these applications, the outputs are calculated in the presence of privacy constraints which prevent parties from sharing their individual inputs directly and openly. I will also discuss what I think are the reasons for the inherent difficulty of developing such routines in general (for achieving actual business goals). In particular, I will survey what I believe to be the reasons that, about 40 years since secure computation protocols were invented as a basic theoretical notion, capturing specific and then general computational tasks, and in spite of their theoretical and even more recent commendable experimental success, the notion has not yet been widely and seriously used in achieving routine relevant business goals (in contrast with symmetric key and public key cryptosystems and protocols, which were also proposed ~40 years ago and are used extensively, primarily to implement secure authenticated channels). I will cover some of the basic methodology taken to deploy the technology, as a prototypical example of the essential role of theoreticians in bridging practice and theory.
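What "outputs calculated without sharing individual inputs" means can be illustrated by the simplest textbook secure computation: summing private values via additive secret sharing. This is a hedged toy sketch (semi-honest parties, no networking, no relation to the deployed protocols of the talk): each party splits its input into random shares so that no single other party learns anything, yet the shares sum to the true total.

```python
# Hedged toy sketch: three parties compute the sum of their private inputs
# via additive secret sharing modulo a public prime. Any single share (or
# partial sum held by one party) is uniformly random and reveals nothing.
import random

P = 2**61 - 1            # public modulus
inputs = [12, 30, 7]     # each party's private value (illustrative)

def share(x, n):
    """Split x into n additive shares mod P that sum to x."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

n = len(inputs)
shares = [share(x, n) for x in inputs]   # party i sends shares[i][j] to party j
# each party j locally adds the shares it received...
partial = [sum(shares[i][j] for i in range(n)) % P for j in range(n)]
# ...and the partial sums are combined publicly into the result
print(sum(partial) % P)  # 49: the total, with no raw input ever pooled
```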