Program

The school consists of two weeks of courses, plus talks and poster sessions. Below is a tentative program; please check back regularly for possible changes. You can also download a PDF version of the program.

First Week

Time          | Mon 18     | Tue 19     | Wed 20     | Thu 21     | Fri 22
08:00 – 08:45 | Registration (open all day)
08:45 – 09:00 | Welcome session
09:00 – 10:00 | Upfal      | Lucchesi   | Robins     | Williamson | Kohayakawa
10:00 – 10:15 | Coffee (all days)
10:15 – 11:15 | Kohayakawa | Upfal      | Williamson | Kohayakawa | Talk 3
11:15 – 12:15 | Lucchesi   | Kleinberg  | Talk 1     | Robins     |
12:15 – 13:30 | Lunch (all days)
13:30 – 14:30 | Kleinberg  | Kohayakawa | Upfal      | Upfal      | Lucchesi
14:30 – 15:30 | Williamson | Robins     | Lucchesi   | Talk 2     | Williamson
15:30 – 16:30 | Coffee; Opening (16:00) | Coffee & Posters (Tue–Fri)
16:30 – 17:30 | Opening    | Classwork (Tue–Thu)     | Robins (Fri)
17:30 – 18:30 | Music      | Classwork (remaining days)

Abstracts

Sample complexity and uniform convergence
Eli Upfal, Brown University, USA

Sampling is a powerful technique at the core of statistical data analysis and machine learning. Using a finite, often small, set of observations, we attempt to estimate properties of an entire sample space. How good are estimates obtained from a sample? Any rigorous application of sampling requires an understanding of the sample complexity of the problem: the minimum sample size needed to obtain the required results. In this course we will cover some of the rich mathematical theory that has been developed in recent years to address this question, in particular in the context of statistical machine learning and rigorous data mining.

Main topics:

  1. Review of large deviation bounds: the Chernoff, Hoeffding, Azuma, and McDiarmid bounds (adapted to the students' background; see the numerical sketch after this list).
  2. Uniform convergence – Glivenko-Cantelli theorem.
  3. VC-dimension.
  4. The \(\epsilon\)-net and \(\epsilon\)-sample theorems.
  5. Applications to sample complexity in machine learning and data mining.
  6. Rademacher complexity.
  7. Sample complexity through Rademacher complexity.
  8. Applications.
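As a small numerical sketch of the first topic (my illustration, not from the course materials): Hoeffding's inequality already answers the sample-complexity question for estimating a single mean, giving \(n \ge \ln(2/\delta)/(2\epsilon^2)\) for \([0,1]\)-valued observations.

    # A minimal sketch, assuming [0,1]-valued samples: Hoeffding's bound
    # says n >= ln(2/delta) / (2 * eps**2) samples suffice so that
    # P(|empirical mean - true mean| > eps) <= delta.
    import math
    import random

    def hoeffding_sample_size(eps: float, delta: float) -> int:
        """Smallest n guaranteed by Hoeffding's inequality."""
        return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

    eps, delta = 0.05, 0.01
    n = hoeffding_sample_size(eps, delta)   # 1060 for these parameters

    # Empirical check: estimate the mean of a Bernoulli(0.3) variable.
    random.seed(0)
    trials = 1000
    failures = sum(
        abs(sum(random.random() < 0.3 for _ in range(n)) / n - 0.3) > eps
        for _ in range(trials)
    )
    print(f"n = {n}, failure rate = {failures / trials} (bound: {delta})")

The observed failure rate is typically far below \(\delta\); the VC and Rademacher bounds in the course extend this single-estimate guarantee uniformly over whole function classes.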

The regularity method and blow-up lemmas for sparse graphs
Yoshiharu Kohayakawa, Universidade de São Paulo, Brazil

A fundamental result in extremal graph theory is the regularity lemma of Szemerédi (1976). The combined use of the regularity lemma and an embedding or a subgraph counting lemma is now usually called the regularity method.

The power of the regularity method can be appreciated when one observes that the removal lemma (1978) and the asymptotic formula for the Turán number of general graphs, due to Erdős, Stone and Simonovits (1946, 1966), have straightforward proofs based on the regularity method.

In its simplest form, the regularity method allows one to investigate the existence of a given fixed graph as a subgraph of a large graph. Komlós, Sárközy and Szemerédi (1997) strengthened the regularity method by developing the blow-up lemma, which, combined with the regularity lemma, allows one to address problems in which the subgraphs being sought are large, for instance, when they are spanning subgraphs (e.g., the \(k\)th power of a Hamilton cycle (1998)).

Owing to the work of several researchers, including Balogh, Conlon, Fox, Gowers, Morris, Rödl, Samotij, Saxton, Schacht, Thomason, and Zhao, the regularity method has also been successfully strengthened to handle graphs with a subquadratic number of edges, that is, one now knows quite well how to apply this method in the sparse setting, at least when one investigates the existence of small subgraphs. What can one do if one is after large subgraphs? Are there blow-up lemmas in the sparse setting?

This course will start with a recap of the basics of the regularity method. We shall discuss in more detail the blow-up lemma and its classical applications. We shall then move on to the discussion of the regularity method in the sparse setting. Novel results on the blow-up lemma in the sparse setting, currently under development in collaboration with P. Allen, J. Böttcher, H. Hàn and Y. Person, will form the last and original part of the course.


Combinatorial stochastic search and selection
Robert Kleinberg, Cornell University, USA

This course will focus on problems that involve designing algorithms for sequential decision problems in the face of uncertainty about future inputs, and combinatorial constraints on the feasible sets of decisions the algorithm may make. One exemplar is the Matroid Secretary Problem, in which the elements of a matroid have (initially unknown) non-negative weights; they are presented to the algorithm in random order, and it must decide upon seeing each element whether or not to select it, with the constraint that the selected elements must form an independent set in the matroid, and with the goal of maximizing the sum of the weights. (The classical secretary problem corresponds to the special case of a rank-one matroid.) Other topics in the same circle of ideas are prophet inequalities, online matching algorithms, and combinatorial generalizations of Weitzman’s “Pandora’s Box” model of optimal search.
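The rank-one special case mentioned above can be simulated in a few lines; the sketch below (my illustration) implements the classical "observe \(n/e\), then take the first record" rule, whose success probability tends to \(1/e\).

    # Simulation sketch of the classical secretary problem: reject the
    # first n/e candidates, then select the first candidate better than
    # all of them. Picks the overall best with probability about 1/e.
    import math
    import random

    def secretary_trial(n: int) -> bool:
        values = [random.random() for _ in range(n)]
        cutoff = int(n / math.e)
        threshold = max(values[:cutoff]) if cutoff > 0 else float("-inf")
        for v in values[cutoff:]:
            if v > threshold:
                return v == max(values)   # did we select the best one?
        return False                      # never selected anyone

    random.seed(1)
    n, trials = 100, 20_000
    wins = sum(secretary_trial(n) for _ in range(trials))
    print(f"success rate {wins / trials:.3f}, 1/e = {1 / math.e:.3f}")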


The perfect matching polytope, solid bricks and the perfect matching lattice
Cláudio L. Lucchesi, Universidade Federal do Mato Grosso do Sul, Brazil

Fifty years ago, Jack Edmonds gave a characterization of the perfect matching polytope. It consists of three kinds of restrictions: (i) nonnegativity, (ii) regularity and (iii) restrictions on odd cuts.
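In standard notation (my rendering, with \(\partial(S)\) the set of edges having exactly one end in \(S\)), Edmonds' description of the perfect matching polytope of \(G=(V,E)\) reads

\[
\mathrm{PM}(G) \;=\; \Bigl\{\, x \in \mathbb{R}^{E} \;:\; x \ge 0;\;\; x(\partial(v)) = 1 \ \text{for all } v \in V;\;\; x(\partial(S)) \ge 1 \ \text{for all } S \subseteq V \text{ with } |S| \text{ odd},\ |S| \ge 3 \,\Bigr\},
\]

the three groups of constraints being exactly (i), (ii) and (iii) above.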

The number of odd cuts is exponentially large. Yet certain graphs require no restrictions of the third kind at all; it is well known that this is the case for bipartite graphs. We present a characterization of the graphs whose perfect matching polytopes are described solely by the first two kinds of restrictions; they are called solid graphs.

Clearly, the linear space spanned by perfect matchings has a basis consisting solely of perfect matchings. Building on the observations and techniques used for the first result, we give a proof of the existence of a basis for the perfect matching lattice consisting solely of perfect matchings. We also pose the open problem of recognizing solid graphs in polynomial time. All this material is joint work with Marcelo H. de Carvalho and U. S. R. Murty. We shall distribute notes containing most of the results discussed in the course.


Recent progress in approximation algorithms for the traveling salesman problem
David Williamson, Cornell University, USA

After several decades of little to no progress, the past few years have brought significant advances in designing approximation algorithms for special cases and variants of the traveling salesman problem (TSP), and yet big questions remain open. In these lectures, I will go through some of the recent results for the graph TSP, the \(s\)-\(t\) path TSP, and the asymmetric TSP, and discuss possible approaches to the remaining open questions.
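As a baseline for these lectures (my illustration; the classical benchmark, not one of the new algorithms), the "double-tree" heuristic achieves a factor of 2 for metric TSP by shortcutting a traversal of a doubled minimum spanning tree:

    # Double-tree 2-approximation for metric TSP (here: points in the
    # plane with the Euclidean metric). Requires the networkx package.
    import itertools
    import networkx as nx

    def double_tree_tour(points):
        G = nx.Graph()
        for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
            d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
            G.add_edge(i, j, weight=d)
        mst = nx.minimum_spanning_tree(G)
        # A depth-first preorder of the MST visits every city once;
        # by the triangle inequality its cost is at most 2 * OPT.
        return list(nx.dfs_preorder_nodes(mst, source=0))

    print(double_tree_tour([(0, 0), (0, 1), (2, 0), (2, 1), (1, 3)]))

Christofides' algorithm improves the factor to 3/2, the barrier that the recent work on graph TSP discussed in the lectures finally broke.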


Harmonic analysis on polytopes and cones
Sinai Robins, University of São Paulo, Brazil

The course develops results in the discrete geometry of polytopes and polyhedral cones, mostly from the perspective of harmonic analysis; we introduce all necessary techniques from first principles, assuming some knowledge of linear algebra and complex analysis. One of the motivating themes is the question of how to discretize the volume of a polytope or a cone. It is straightforward to see that there are infinitely many ways to do so, and we will focus on at least two discretizations, the first of which relies on a higher-dimensional generalization of the concept of an angle. The other most popular discretization of volume is the counting of lattice points in a polytope, and it has combinatorial underpinnings. A recurring theme for us is the analytic tool called the Poisson summation formula, and many interesting number-theoretic corollaries follow from these developments.

We begin with examples in \(2\) and \(3\) dimensions and with the computation of the Laplace transforms of cones, which turn out to be rational functions. We then give a new proof of a classical (1988) theorem of Brion, which essentially tells us that the Laplace transform of a polytope equals the sum of the Laplace transforms of its tangent cones.
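A simple instance (a standard computation, my choice of example): for the positive orthant \(K = \mathbb{R}^d_{\ge 0}\),

\[
\int_K e^{-\langle s, x \rangle}\, dx \;=\; \prod_{j=1}^{d} \int_0^{\infty} e^{-s_j x_j}\, dx_j \;=\; \frac{1}{s_1 s_2 \cdots s_d}, \qquad s_j > 0,
\]

a rational function of \(s\); a general simplicial cone reduces to the orthant by a linear change of variables.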

While finding formulas for volumes of \(d\)-dimensional polytopes, we introduce a higher-dimensional analogue of an angle, called a “solid angle”, namely the intersection of a \(d\)-dimensional sphere with a cone. Finally, we introduce and study, from first principles, cone theta functions, a natural discrete analogue of the \(d\)-dimensional solid angle. We will also study some Euler–Maclaurin summation formulae over polyhedral cones and polytopes, which are natural extensions of the classical \(1\)-dimensional Euler–Maclaurin summation formula on an interval.
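To make the solid-angle notion concrete, here is a small Monte Carlo sketch (my illustration): the normalized solid angle of a cone at the origin is the probability that a uniformly random direction lies in the cone, and for the positive orthant this probability is exactly \(2^{-d}\).

    # Estimate the normalized solid angle of the positive orthant in R^d.
    # A standard Gaussian vector has a uniformly random direction, so the
    # hit frequency estimates the solid angle; the exact value is 2**(-d).
    import random

    def orthant_solid_angle(d: int, samples: int = 200_000) -> float:
        hits = sum(
            all(random.gauss(0.0, 1.0) > 0.0 for _ in range(d))
            for _ in range(samples)
        )
        return hits / samples

    random.seed(2)
    for d in (2, 3, 4):
        print(d, orthant_solid_angle(d), 2.0 ** (-d))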

Second Week

Time          | Mon 25     | Tue 26     | Wed 27     | Thu 28     | Fri 29
09:00 – 10:00 | Kostochka  | Tunçel     | Kráľ       | Tunçel     | Oliveira
10:00 – 10:15 | Coffee (all days)
10:15 – 11:15 | Tunçel     | Kráľ       | Morris     | Kráľ       | Kostochka
11:15 – 12:15 | Talk 4     | Miyazawa   | Miyazawa   | Kostochka  | Talk 6
12:15 – 13:30 | Lunch (all days)
13:30 – 14:30 | Kráľ       | Kostochka  | Tunçel     | Oliveira   |
14:30 – 15:30 | Morris     | Morris     | Talk 5     | Morris     |
15:30 – 16:30 | Coffee & Posters (all days)
16:30 – 17:30 | Classwork (all days)
17:30 – 18:30 | Talent Show (Mon) | Classwork (remaining days)

Abstracts

Coloring sparse graphs with few colors
Alexandr Kostochka, University of Illinois at Urbana-Champaign, USA

It is natural that graphs with fewer edges are easier to color. The structure of the lectures is as follows.

  1. We will describe the history of lower bounds on the minimum number of edges in \(k\)-color-critical graphs with a given number of vertices and prove some of these bounds. In particular, we prove the case \(k=4\) of Gallai’s Conjecture from 1963 on the topic (a precise statement follows this list).
  2. We describe some applications of these bounds. In particular, we give short proofs of Grötzsch’s Theorem and of the Axenov–Grünbaum Theorem on 3-colorings of planar graphs.
  3. We describe some constructions of sparse color-critical graphs and hypergraphs.
  4. Some unsolved problems will be discussed.
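For orientation on item 1 (quoted here from memory, so the constants should be checked against the lecture notes): the case \(k=4\) amounts to the statement, proved by Kostochka and Yancey, that every 4-critical graph on \(n\) vertices has at least

\[
\frac{5n-2}{3}
\]

edges, a bound attained for infinitely many \(n\).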

The method of hypergraph containers
Robert Morris, IMPA, Brazil

Given a (fixed) graph \(H\), consider (for each \(n \in \mathbb{N}\)) the family \({\mathcal F}_n(H)\) of graphs on \(n\) vertices that do not contain a copy of \(H\) as a subgraph. This family has been a central object of study in extremal combinatorics for over 70 years, since the groundbreaking work of Turán, Erdős and Stone, and others, on the maximum number of edges in a member of \({\mathcal F}_n(H)\). Over the intervening decades, a huge amount of work has been put into understanding this problem in various special cases (see 2), but nevertheless many basic questions remain unanswered, most notably when \(H\) is bipartite (see 3). Another famous question asks for the minimum independence number of a member of \({\mathcal F}_n(H)\); indeed, this is equivalent to determining the Ramsey numbers \(r(H,K_t)\).

More recently, beginning in the 1970s but most notably since the 1990s, a new direction of research has attracted a great deal of attention (see 4). The basic questions are as follows: How many \(H\)-free graphs are there with \(n\) vertices? What is the typical structure of such a graph (perhaps with a given number of edges)? And what can one say about the \(H\)-free subgraphs of the Erdős–Rényi random graph \(G(n,p)\)? (Similar questions can of course be asked in related combinatorial settings, e.g., for sets of integers containing no \(k\)-term arithmetic progression, or graphs containing no induced copy of \(H\).)

In this course we will discuss a recently developed technique (see 1 and 5) that has proved to be extremely useful in the study of such problems. In order to illustrate the method, we will consider the number of \(C_4\)-free graphs, the typical structure of a \(K_r\)-free graph, and the extremal problem in \(G(n,p)\) for general \(H\). We will also briefly discuss how the method can be applied to a diverse variety of discrete objects, such as sum-free sets of integers, colourings of graphs, and metric spaces.
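For very small \(n\) the counting question can be answered by brute force; the sketch below (my illustration) enumerates all labelled graphs and keeps the triangle-free ones. At scale the counts grow like \(2^{(1+o(1))\,\mathrm{ex}(n,K_3)} = 2^{n^2/4 + o(n^2)}\), the phenomenon that the container method explains.

    # Count triangle-free graphs on n labelled vertices by enumerating
    # all 2^(n choose 2) graphs; only feasible for tiny n.
    from itertools import combinations

    def count_triangle_free(n: int) -> int:
        edges = list(combinations(range(n), 2))
        triples = list(combinations(range(n), 3))
        count = 0
        for mask in range(1 << len(edges)):
            present = {e for k, e in enumerate(edges) if mask >> k & 1}
            if all(not {(a, b), (a, c), (b, c)} <= present
                   for a, b, c in triples):
                count += 1
        return count

    for n in range(2, 6):
        print(n, count_triangle_free(n))   # 2, 7, 41, ...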

  1. J. Balogh, R. Morris and W. Samotij, Independent sets in hypergraphs, J. Amer. Math. Soc. 28 (2015), pp. 669–709.
  2. B. Bollobás, Modern Graph Theory, Springer, 1998.
  3. Z. Füredi and M. Simonovits, The history of degenerate (bipartite) extremal graph problems, Erdős Centennial, Bolyai Soc. Math. Studies, 25 (2013) (eds L. Lovász, I. Ruzsa and V. T. Sós), pp. 169–264.
  4. V. Rödl and M. Schacht, Extremal results in random graphs, Erdős Centennial, Bolyai Soc. Math. Studies, 25 (2013) (eds L. Lovász, I. Ruzsa and V. T. Sós), pp. 535–583.
  5. D. Saxton and A. Thomason, Hypergraph containers, Inventiones Math. 201 (2015), pp. 1–68.

Graph limits and their applications in extremal combinatorics
Daniel Kráľ, University of Warwick, UK

Combinatorial limits give an analytic way to represent large discrete objects. The theory of combinatorial limits has led to the development of new techniques in discrete mathematics and provided new views of existing notions and concepts. It has also opened new links between analysis, combinatorics, computer science, group theory and probability theory. Combinatorial limits are also closely related to the flag algebra method, which has led to the solution of several long-standing open problems in extremal combinatorics.

The tutorial will focus on limits of dense graphs, which form the best understood case in the theory of combinatorial limits, and on the flag algebra method. The lectures will be complemented by exercise problems of varying difficulty to give the participants an opportunity to practice the presented material.

The tentative syllabus of the tutorial is the following:

  1. Introduction – dense graph convergence, graph limits.
  2. Existence and uniqueness of graph limit representations.
  3. Flag algebras and their relation to graph limits.
  4. Graph quasirandomness via graph limits.
  5. Use of flag algebras via SDP in extremal graph theory.

Depending on the interest of the participants, some of the following topics will also be discussed at the end of the tutorial: finite forcibility of graph limits, limits of sparse graphs, limits of other discrete objects (hypergraphs, partial orders, permutations).
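To give a flavour of the dense theory: a graph limit is a symmetric measurable function \(W \colon [0,1]^2 \to [0,1]\), and limit densities of a fixed graph become integrals. The sketch below (my illustration, for the graphon \(W(x,y) = xy\)) estimates two such densities by Monte Carlo; the exact values are \(1/4\) for an edge and \(1/27\) for a triangle.

    # Monte Carlo estimates of homomorphism densities t(H, W) for the
    # illustrative graphon W(x, y) = x * y on [0,1]^2.
    import random

    def W(x: float, y: float) -> float:
        return x * y

    def edge_density(samples: int = 200_000) -> float:
        return sum(W(random.random(), random.random())
                   for _ in range(samples)) / samples

    def triangle_density(samples: int = 200_000) -> float:
        total = 0.0
        for _ in range(samples):
            x, y, z = random.random(), random.random(), random.random()
            total += W(x, y) * W(y, z) * W(z, x)
        return total / samples

    random.seed(3)
    print(edge_density(), 1 / 4)         # both close to 0.25
    print(triangle_density(), 1 / 27)    # both close to 0.037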


Approximation Algorithms for Packing Circles
Flávio Keidi Miyazawa, Unicamp, Brazil

We show some techniques to obtain approximation algorithms for the circle bin packing problem.

For the offline case, we present an approach for obtaining asymptotic approximation schemes that is valid for packing circles, spheres and, more generally, \(d\)-dimensional spheres under the \(L_p\)-norm. The idea is to iteratively separate small and large items, and it may be extended to a wide range of packing problems that satisfy certain conditions. These extensions comprise problems with different kinds of items, such as regular polygons, or with bins of different shapes, such as circles and spheres. As examples, we obtain APTASs for the problems of packing \(d\)-dimensional spheres into hypercubes under the \(L_p\)-norm and for the strip packing versions of these problems.

For the online case, we present an approach to obtain competitive algorithms for the online bounded space circle bin packing problem. The techniques used are borrowed from the rectangle packing version, and we show how to obtain competitive algorithms as well as lower bounds that hold for any constant-competitive algorithm.


Upper bounds for geometric packing problems
Fernando Mário de Oliveira Filho, USP, Brazil

Geometric packing problems ask us to pack copies of given geometric objects in a given space so as to cover as much of the space as possible (i.e., so as to maximize the packing density). A classical example is the sphere packing problem, which asks for the maximum fraction of Euclidean space that can be covered by pairwise nonoverlapping unit spheres. There are many other examples: we may wish to pack rotated and translated copies of polygons in the plane, spherical caps on the surface of the sphere, or translated copies of tetrahedra in \(\mathbb{R}^3\).

On the one hand, one wishes to construct packings, thus obtaining lower bounds for the maximum packing density. On the other hand, there is the nonconstructive task of proving that no packing can ever exceed a given density, that is, of finding upper bounds. This course will focus on methods that can be used to provide upper bounds for the packing density. These methods rely on semidefinite programming, both to formulate the bounds and to compute them; they also rely heavily on harmonic analysis.


Semidefinite Programming Techniques in Combinatorial Optimization
Levent Tunçel, University of Waterloo, Canada

We will start with a discussion of various forms of Semidefinite Programming (SDP) problems and some necessary background (no previous background on SDPs is required). Then, we will formulate various problems in combinatorial optimization, graph theory, and discrete mathematics in general either as SDP problems or as nonconvex optimization problems with natural and useful SDP relaxations. We will continue with some geometric representations of graphs as they relate to SDP, more recent work on the Lovász theta body and its extensions, and lift-and-project methods, and conclude with some of the more recent work in these areas, including lifted SDP representations (or extended formulations), and some open research problems.
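One concrete example of the above (my sketch, using the cvxpy modelling package): the Lovász theta function is itself an SDP, \(\vartheta(G) = \max\{\langle J, X\rangle : \operatorname{tr} X = 1,\ X_{ij} = 0 \ \text{for all } ij \in E,\ X \succeq 0\}\), and for the 5-cycle it evaluates to \(\sqrt{5}\).

    # Lovasz theta of the 5-cycle via its standard SDP formulation.
    # Requires the cvxpy package and an SDP-capable solver.
    import cvxpy as cp

    n = 5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the 5-cycle

    X = cp.Variable((n, n), PSD=True)
    constraints = [cp.trace(X) == 1]
    constraints += [X[i, j] == 0 for i, j in edges]
    problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    problem.solve()
    print(problem.value)   # approximately 2.236 = sqrt(5)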

Talks

Talk 1: Adaptive rumor spreading
Marcos Kiwi, Universidad de Chile

Rumor spreading is a basic model for information dissemination in a social network. In this setting a central concern for an entity, say the service provider, is to find ways to speed up the dissemination process and to maximize the overall information spread. However, an issue absent from this setting is that of adaptivity: how can the service provider use information about the current state of the network to cost-effectively speed up the process?

Motivated by the recent emergence of so-called opportunistic communication networks, we take a novel approach by considering the issue of adaptivity in the most basic continuous-time (asynchronous) rumor spreading process. In our setting a rumor has to be spread to a population, and the service provider can push it at any time to any node in the network at unit cost. On the other hand, as usual in rumor spreading, nodes share the rumor upon meeting, and this imposes no cost on the service provider. We consider the cost version of the problem with a fixed deadline and ask for a minimum-cost strategy that spreads the rumor to every node. A non-adaptive strategy can only intervene at the beginning and at the end, while an adaptive strategy has full knowledge and intervention capabilities. Our main result is that in the homogeneous case (where every pair of nodes meets randomly at the same rate) the benefit of adaptivity is bounded by a constant. This requires a subtle analysis of the underlying random process that is of interest in its own right. Joint work with J.R. Correa (U. Chile), N. Olver (VU University Amsterdam; and CWI, Amsterdam), and A. Vera (U. Chile).
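A bare-bones simulation sketch of the homogeneous process without interventions (my illustration; I normalize each pair to meet at rate \(1/n\)): with one initially informed node, the expected completion time is \(\sum_{i=1}^{n-1} n/(i(n-i)) \approx 2\ln n\).

    # Asynchronous homogeneous rumor spreading: only meetings between an
    # informed and an uninformed node change the state, and when i nodes
    # are informed those meetings occur at total rate i * (n - i) / n.
    import math
    import random

    def spread_time(n: int) -> float:
        informed, t = 1, 0.0
        while informed < n:
            rate = informed * (n - informed) / n
            t += random.expovariate(rate)   # wait for next useful meeting
            informed += 1
        return t

    random.seed(4)
    n, runs = 1000, 200
    avg = sum(spread_time(n) for _ in range(runs)) / runs
    print(f"average completion time {avg:.2f}, 2 ln n = {2 * math.log(n):.2f}")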


Talk 2: Determining the rank of some graph convexities
Jayme L. Szwarcfiter, UFRJ

A graph convexity is a pair \((V(G), {\mathcal C})\), where \(V(G)\) is the vertex set of a finite graph \(G\) and \({\mathcal C}\) is a family of subsets of \(V(G)\), each called convex, such that \(\emptyset, V(G) \in {\mathcal C}\) and \({\mathcal C}\) is closed under intersections. The most studied graph convexities are based on paths. The geodetic, monophonic and \(P_3\) convexities are those in which \({\mathcal C}\) is closed under shortest paths, induced paths and common neighbors, respectively. For a set \(S \subseteq V(G)\), the convex hull of \(S\), denoted \(H(S)\), is the smallest convex set containing \(S\). We say that \(S\) is convexly independent when \(v \not \in H(S \setminus \{v\})\) for every \(v \in S\). The rank of \((V(G), {\mathcal C})\) is the size of a largest convexly independent set. We study the determination of the rank for the above path convexities.
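The \(P_3\) case is easy to experiment with: a set is convex when no outside vertex has two or more neighbours inside it, so the hull is obtained by repeatedly adding such vertices. A small sketch (my illustration):

    # P_3 convex hull: repeatedly absorb vertices with >= 2 neighbours
    # already in the hull, until the set is closed under common neighbours.
    def p3_hull(adjacency: dict, S: set) -> set:
        hull = set(S)
        changed = True
        while changed:
            changed = False
            for v, nbrs in adjacency.items():
                if v not in hull and len(nbrs & hull) >= 2:
                    hull.add(v)
                    changed = True
        return hull

    # A 5-cycle: the hull of {0, 2} absorbs their common neighbour 1.
    cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
    print(sorted(p3_hull(cycle5, {0, 2})))   # [0, 1, 2]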


Talk 3: Progression-free sets, the polynomial method, and arithmetic removal lemmas
Robert Kleinberg, Cornell University, USA

Letting \(C_m\) denote the cyclic group \(\mathbb{Z}/m\mathbb{Z}\), what is the largest subset of \((C_m)^n\) having no three distinct elements in arithmetic progression? Up until a couple of weeks ago, the best known upper bounds were barely better than the trivial upper bound of \(m^n\), even for the case of \(m=3\), when the question is known as the “cap set” problem. In a breakthrough just this month, Croot, Lev, and Pach found a way to prove a non-trivial upper bound for \(m=4\) using the polynomial method. This was soon extended by Ellenberg and (independently) Gijswijt to derive an upper bound of \((0.95m)^n\) for all \(m>2\). The proof is simple, beautiful, and unbelievably short.
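For tiny \(n\) the extremal sizes can be found by brute force. In \((C_3)^n\), three distinct points form a progression exactly when they sum to zero coordinatewise, which the backtracking sketch below (my illustration) exploits; the maxima are 2, 4 and 9 for \(n = 1, 2, 3\).

    # Maximum progression-free subset of (C_3)^n by backtracking. In
    # characteristic 3, {x, y, z} is a 3-AP iff x + y + z = 0, so a new
    # point p is forbidden iff p = -(q + r) for distinct chosen q, r.
    from itertools import product

    def max_capset(n: int) -> int:
        points = list(product(range(3), repeat=n))
        best = 0

        def ok(p, chosen):
            return all(
                tuple((-q[i] - r[i]) % 3 for i in range(n)) != p
                for j, q in enumerate(chosen) for r in chosen[j + 1:]
            )

        def extend(chosen, start):
            nonlocal best
            best = max(best, len(chosen))
            for k in range(start, len(points)):
                if ok(points[k], chosen):
                    chosen.append(points[k])
                    extend(chosen, k + 1)
                    chosen.pop()

        extend([], 0)
        return best

    for n in (1, 2):
        print(n, max_capset(n))   # 2, 4 (n = 3 gives 9, but slowly)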

The Croot-Lev-Pach lemma has subsequently found other applications: for example, Fox and Lovász have used it to give an optimal bound for Green’s “arithmetic triangle removal lemma” in vector spaces over \(\mathbb{F}_p\). Blasiak, Church, Cohn, Grochow, and Umans have shown that it refutes many (but not all) of the conjectured approaches for applying the group-theoretic fast matrix multiplication paradigm of Cohn-Umans to prove that the exponent of matrix multiplication equals 2. In this talk, I will give some historical background on these problems, present the proofs of the new upper bounds for progression-free sets, present a nearly-matching lower bound for a “tripartite” version of the progression-free set problem, and discuss the application to arithmetic removal lemmas.


Talk 4: Random models of 21st century networks and their connectivity structure
Bruce Reed, McGill University, Canada

The traditional Erdős–Rényi model of a random network is of little use in modeling the type of complex networks which modern researchers study. It postulates that each node has the same likelihood of being attached to every other node. However, in the web, for example, certain authoritative pages will have many more links entering them. A 1995 paper of Molloy and Reed, cited over 1500 times, sets out some conditions guaranteeing the existence of a giant component in a graph with a specified degree sequence. This work has attracted such a great deal of attention because it can be applied to random models of a wide range of complicated 21st century networks, such as the web or biological networks operating at a sub-molecular level. A heuristic argument suggests that a giant component will exist provided the sum of the squares of the degrees of the vertices of the network is at least twice the sum of the degrees. Molloy and Reed proved that this is indeed true subject to certain technical conditions. Many authors have obtained related results by specifying different technical conditions, or by tying down the size of the giant component.
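The quoted heuristic is a one-liner to test (my illustration): writing \(Q = \sum_i d_i(d_i - 2)\), a giant component is expected precisely when \(\sum_i d_i^2 \ge 2 \sum_i d_i\), i.e. when \(Q \ge 0\).

    # Molloy-Reed heuristic: compare the sum of squared degrees with
    # twice the sum of degrees for a few degree sequences.
    def giant_expected(degrees) -> bool:
        return sum(d * d for d in degrees) >= 2 * sum(degrees)

    print(giant_expected([1] * 100))           # False: a perfect matching
    print(giant_expected([2] * 100))           # boundary case: unions of cycles
    print(giant_expected([1, 1, 1, 4] * 25))   # True: a few hubs suffice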

Since the interest in this result lies in its wide applicability, it is natural to try to prove it under as few assumptions as possible. Recently, Joos, Perarnau-Llobet, Rautenbach, and Reed proved the result under essentially no conditions.

I will present, in an accessible way, a variety of complex networks and their random models to which the Molloy–Reed result has been applied. I will then briefly sketch the proof of our result and explain how it differs from the proof of the Molloy–Reed result.


Talk 5: On the structure of almost every \(H\)-free graph, when \(H\) is a tree or a cycle
Bruce Reed, McGill University, Canada


Talk 6: Solving NP-hard geometric optimization problems to optimality
Cid Carvalho de Souza, UNICAMP

There are several NP-hard problems investigated by researchers in computational geometry for which no exact algorithms have been developed and/or tested. The common approach in the field is to look for approximation algorithms. In practice, these algorithms often produce a solution whose cost is far from the optimum. In this talk we show that large instances of some geometric problems can be solved to proven optimality when integer programming techniques are combined in a clever way with geometric properties. Among the successful examples, we discuss the well-known Art Gallery Problem.
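A schematic sketch of the general recipe (my simplification, not the authors' actual algorithm): once guard candidates and witness points have been discretized and visibility between them computed geometrically, the Art Gallery Problem becomes a set-cover integer program.

    # Minimum-cardinality guard set as a set-cover ILP, solved with the
    # PuLP package (bundled CBC solver). The visibility data below is a
    # hypothetical toy instance; real instances come from the polygon.
    import pulp

    # witness point -> set of candidate guards that see it
    visibility = {0: {0, 1}, 1: {1, 2}, 2: {0, 2}, 3: {2}}
    guards = {g for seen in visibility.values() for g in seen}

    x = {g: pulp.LpVariable(f"x_{g}", cat="Binary") for g in guards}
    model = pulp.LpProblem("art_gallery", pulp.LpMinimize)
    model += pulp.lpSum(x.values())                  # minimize number of guards
    for w, seen in visibility.items():               # every witness must be seen
        model += pulp.lpSum(x[g] for g in seen) >= 1
    model.solve(pulp.PULP_CBC_CMD(msg=False))
    print(sorted(g for g in guards if x[g].value() == 1))   # an optimal pair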