Time | Speaker | Title |
10:00–10:20 | Registration & Breakfast | |
10:20–10:40 | Arthur van Goethem (TU Eindhoven) |
Labelling groups of islands |
A variety of recurring geographic entities form collections of point, line or area features. Examples are groups of islands (archipelagos), relief features in deserts, periglacial lakes, and geomorphological forms such as drumlins and sinkholes. All these groups of features might best be identified with a single label. Surprisingly, the (automated) labelling of groups of features has received little attention so far. We investigate the labelling of feature groups by comparing manually generated labels to a set of formally defined, algorithmically optimal placements. By taking this systematic approach, we hope to detect if, and which, formal measures are, possibly subconsciously, used by cartographers. In the second part of the talk we focus on some of the algorithms required to compute the algorithmically optimal labels. To be precise, we will look at straight-line and circular-arc labels that may (or may not) overlap a given set of islands. We present efficient algorithms to compute the line or circle minimizing the maximum distance to the islands, where the distance to each island is measured to its closest point. Part of this work was presented at the International Cartographic Conference 2015; the rest is work in progress. |
||
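The simplest flavor of the line-fitting problem above can be sketched for horizontal lines and islands sampled as point sets: minimize, over the offset \(c\) of the line \(y = c\), the maximum over islands of the distance to that island's closest point. The brute-force candidate set below (all y-coordinates plus their pairwise midpoints, which contain every breakpoint of the piecewise-linear cost) is an illustrative baseline, not the efficient algorithm from the talk.

```python
def best_horizontal_line(islands):
    """islands: list of point lists [(x, y), ...]. Returns (c, cost) for
    the horizontal line y = c minimizing the maximum, over islands, of
    the distance from the line to the island's closest point."""
    ys = sorted({y for isl in islands for _, y in isl})
    # local minima of the max of V-shaped functions lie at y-values
    # or at midpoints of pairs of y-values
    candidates = set(ys) | {(a + b) / 2 for a in ys for b in ys}

    def cost(c):
        return max(min(abs(y - c) for _, y in isl) for isl in islands)

    c = min(candidates, key=cost)
    return c, cost(c)

# two "islands", one near y=0 and one near y=4: the best line is y=2
islands = [[(0, 0), (1, 0)], [(0, 4), (2, 4)]]
print(best_horizontal_line(islands))  # (2.0, 2.0)
```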
10:40–11:00 | Philipp Kindermann (FernUniversität in Hagen) |
Windrose Planarity: Embedding Graphs with Direction-Constrained Edges |
Given a planar graph \(G(V,E)\) and a partition of the neighbors of each vertex \(v \in V\) into four sets \(UR(v)\), \(UL(v)\), \(DL(v)\), and \(DR(v)\), the problem Windrose Planarity asks to decide whether \(G\) admits a windrose-planar drawing, that is, a planar drawing in which (i) each neighbor \(u \in UR(v)\) is above and to the right of \(v\), (ii) each neighbor \(u \in UL(v)\) is above and to the left of \(v\), (iii) each neighbor \(u \in DL(v)\) is below and to the left of \(v\), (iv) each neighbor \(u \in DR(v)\) is below and to the right of \(v\), and (v) edges are represented by curves that are monotone with respect to each axis. By exploiting both the horizontal and the vertical relationship among vertices, windrose-planar drawings make it possible to simultaneously visualize two partial orders defined by means of the edges of the graph. Although the problem is NP-hard in the general case, we give a polynomial-time algorithm for testing whether there exists a windrose-planar drawing that respects a combinatorial embedding that is given as part of the input. This algorithm is based on a characterization of the plane triangulations admitting a windrose-planar drawing. Furthermore, for any embedded graph admitting a windrose-planar drawing we show how to construct one with at most one bend per edge on an \(O(n) \times O(n)\) grid. The latter result contrasts with the fact that straight-line windrose-planar drawings may require exponential area. | ||
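Conditions (i)-(iv) are easy to verify for a concrete straight-line drawing (where condition (v) holds automatically). The checker below is a small illustrative sketch; the coordinates and quadrant partition are made-up examples, not data from the talk.

```python
# Verify quadrant constraints (i)-(iv) of a windrose-planar drawing,
# assuming straight-line edges (which are trivially axis-monotone).
def satisfies_windrose(pos, quadrants):
    """pos: vertex -> (x, y); quadrants: vertex -> dict mapping 'UR',
    'UL', 'DL', 'DR' to sets of that vertex's neighbors."""
    checks = {
        'UR': lambda dx, dy: dx > 0 and dy > 0,
        'UL': lambda dx, dy: dx < 0 and dy > 0,
        'DL': lambda dx, dy: dx < 0 and dy < 0,
        'DR': lambda dx, dy: dx > 0 and dy < 0,
    }
    for v, parts in quadrants.items():
        xv, yv = pos[v]
        for quad, neighbors in parts.items():
            for u in neighbors:
                xu, yu = pos[u]
                if not checks[quad](xu - xv, yu - yv):
                    return False
    return True

pos = {'a': (0, 0), 'b': (2, 1), 'c': (-1, 3)}
quads = {'a': {'UR': {'b'}, 'UL': {'c'}, 'DL': set(), 'DR': set()}}
print(satisfies_windrose(pos, quads))  # True
```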
11:00–11:20 | Marc van Kreveld (Utrecht University) |
Quality Measures in Graph Drawing |
Algorithms researchers like to compare: which algorithm is better for some well-defined task is determined by the function inside the big-O. Furthermore, we like ratios: an approximation algorithm for an NP-hard optimization problem realizes an approximation ratio, and an on-line algorithm for an on-line problem realizes a competitive ratio. Finally, we like worst-case analysis: efficiency, approximation and competitiveness are determined by worst-case input. In this talk we apply this custom of worst-case analysis and ratios to graph drawing quality, comparing different styles for drawing a graph. We obtain results like: when comparing planar straight-line drawings and planar circular-arc drawings, the latter can be six times as good as the former with respect to angular resolution. We consider several drawing styles and several quality measures. | ||
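One of the quality measures mentioned, angular resolution, is the smallest angle between two edges incident to a common vertex. A minimal sketch for straight-line drawings (the drawing below is an illustrative example, not one from the talk):

```python
import math

def angular_resolution(pos, edges):
    """pos: vertex -> (x, y); edges: list of vertex pairs.
    Returns the smallest angle (radians) between incident edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    best = 2 * math.pi
    for v, nbrs in adj.items():
        angles = [math.atan2(pos[u][1] - pos[v][1], pos[u][0] - pos[v][0])
                  for u in nbrs]
        for i in range(len(angles)):
            for j in range(i + 1, len(angles)):
                d = abs(angles[i] - angles[j]) % (2 * math.pi)
                best = min(best, min(d, 2 * math.pi - d))
    return best

# a star K_{1,4} drawn with edges at 90-degree spacing
pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (-1, 0), 4: (0, -1)}
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(round(math.degrees(angular_resolution(pos, edges))))  # 90
```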
11:20–11:40 | Thom Castermans (TU Eindhoven) |
Mosaic Drawings and Cartograms |
Cartograms visualize quantitative data about a set of regions such as countries or states. There are several different types of cartograms and — for some — algorithms to automatically construct them exist. We focus on mosaic cartograms: cartograms that use multiples of simple tiles — usually squares or hexagons — to represent regions. Mosaic cartograms communicate data well when the data consist of, or can be cast into, small integer units (for example, electoral college votes). In addition, they allow users to accurately compare regions and can often maintain a (schematized) version of the input regions' shapes. We propose the first fully automated method to construct mosaic cartograms. To do so, we first introduce mosaic drawings of triangulated planar graphs. We then show how to modify mosaic drawings into mosaic cartograms with low cartographic error while maintaining correct adjacencies between regions. We validate our approach experimentally and compare to other cartogram methods. | ||
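The cartographic error mentioned above can be illustrated with a tiny sketch: when a region's value is represented by an integer number of tiles, the relative error is the mismatch between tiles-times-unit and the true value. The function and data below are hypothetical illustrations, not the paper's definition or data.

```python
def cartographic_error(values, tiles, unit):
    """values: region -> data value; tiles: region -> tile count;
    unit: data value represented by one tile.
    Returns region -> relative error of the tile representation."""
    return {r: abs(tiles[r] * unit - values[r]) / values[r] for r in values}

values = {'A': 9.6, 'B': 20.4}   # e.g. population in millions
tiles = {'A': 10, 'B': 20}       # one tile per million
err = cartographic_error(values, tiles, unit=1.0)
print(err)  # A is off by ~4.2%, B by ~2.0%
```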
11:40–12:00 | Coffee break | |
12:00–12:50 | Rien van de Weijgaert (Rijksuniversiteit Groningen), invited speaker |
Tessellations and the Geometry of the Cosmic Web |
12:50–14:10 | Lunch | |
14:10–14:30 | Amer Krivošija (TU Dortmund) |
Clustering time series under the Fréchet distance
The Fréchet distance is a popular distance measure for curves. We study the problem of clustering time series under the Fréchet distance. In particular, we give \((1+\epsilon)\)-approximation algorithms for variations of the following problem with parameters \(k\) and \(\ell\). Given \(n\) univariate time series \(P\), each of complexity at most \(m\), we find \(k\) time series, not necessarily from \(P\), which we call cluster centers and which each have complexity at most \(\ell\), such that (a) the maximum distance of an element of \(P\) to its nearest cluster center or (b) the sum of these distances is minimized. Our algorithms have running time near-linear in the input size. To the best of our knowledge, our algorithms are the first clustering algorithms for the Fréchet distance which achieve an approximation factor of \((1+\epsilon)\) or better. | ||
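Objective (a) is a k-center objective; a classic baseline for it is Gonzalez's farthest-first traversal, which is a 2-approximation under any metric. Unlike the talk's \((1+\epsilon)\)-approximations, it picks centers from \(P\) itself; the aligned max-difference metric below is only an illustrative stand-in for the Fréchet distance.

```python
def series_dist(a, b):
    # Chebyshev distance between equal-length univariate time series
    # (an illustrative metric, simpler than the Frechet distance)
    return max(abs(x - y) for x, y in zip(a, b))

def gonzalez_k_center(P, k, dist):
    """Farthest-first traversal: 2-approximation for k-center."""
    centers = [P[0]]
    while len(centers) < k:
        # add the series farthest from its nearest current center
        centers.append(max(P, key=lambda p: min(dist(p, c) for c in centers)))
    return centers

P = [[0, 0], [0, 1], [10, 10], [10, 11]]
print(gonzalez_k_center(P, 2, series_dist))  # [[0, 0], [10, 11]]
```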
14:30–14:50 | Maike Buchin (Ruhr-Universität Bochum) |
A middle curve based on discrete Fréchet distance
Given a set of polygonal curves we seek to find a middle curve that represents the set of curves. We require that the middle curve consists of points of the input curves and that it minimizes the discrete Fréchet distance to the input curves. We present algorithms for three different variants of this problem: computing an ordered middle curve, computing a restricted middle curve, and computing an unordered middle curve. This is joint work with Hee-Kap Ahn, Helmut Alt, Eunjin Oh, Ludmila Scharf and Carola Wenk. | ||
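The discrete Fréchet distance underlying the middle-curve problem can be computed with the classic \(O(nm)\) dynamic program of Eiter and Mannila; the curves below are illustrative point sequences.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between point sequences P and Q."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): coupling cost for prefixes P[:i+1], Q[:j+1]
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0
```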
14:50–15:10 | Roland Geraerts (Utrecht University) |
A computational model of human navigation: from theory to practice |
A huge challenge is to simulate tens of thousands of pedestrians in real time such that they pro-actively and realistically avoid collisions with each other and with obstacles present in their environment. We present a five-level framework which efficiently simulates realistic behavior. In this framework, we have implemented many algorithms that heavily rely on computational geometry, including 2D segment Voronoi diagrams, dynamic updates of these diagrams, nearest-neighbor search, visibility polygons and graphs, shortest paths with clearance, short paths in weighted regions, and their extensions to multi-layered environments such as a train station. We will discuss experiments with these algorithms and their practical implementations. |
||
15:10–15:20 | Short break | |
15:20–15:40 | Bodo Manthey (Universiteit Twente) |
Smoothed Analysis of the 2-Opt Heuristic for the Traveling Salesman Problem |
The 2-opt heuristic is a very simple local search heuristic for the traveling salesman problem. While it usually converges quickly in practice to a good solution, its worst-case running time and approximation ratio are poor. In order to explain the practical performance of 2-opt, we use smoothed analysis: an adversary places \(n\) points arbitrarily, and then the points are perturbed by adding independent Gaussian random variables of standard deviation \(\sigma\). We show that the maximum expected running time that the adversary can achieve is bounded by a polynomial in \(n\) and \(1/\sigma\) and that the maximum expected approximation ratio is \(O(\log(1/\sigma))\). |
||
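The 2-opt heuristic itself fits in a few lines: repeatedly reverse a segment of the tour while doing so shortens it. A minimal sketch on an illustrative Euclidean instance (the crossing tour on a unit square, which one 2-opt move untangles):

```python
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts):
    """Run 2-opt to a local optimum, starting from the identity tour."""
    tour = list(range(len(pts)))
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                # reversing tour[i+1..j] replaces edges (a,b) and (c,d)
                # by (a,c) and (b,d)
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                old = math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d])
                new = math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                if new < old - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

pts = [(0, 0), (1, 1), (1, 0), (0, 1)]   # identity tour crosses itself
tour = two_opt(pts)
print(tour_length(tour, pts))  # 4.0 (the optimal square tour)
```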
15:40–16:00 | Sandor Kisfaludi-Bak (TU Eindhoven) |
Navigating graphs with \(\log n + O(\log \log n)\) bit addresses |
Greedy routing is a simple algorithm that tries to deliver a message along the edges of a graph whose vertices are associated with points of a (semi-)metric space. A greedy routing scheme defines the space and the embedding of the vertex set so that message delivery is guaranteed between any source-destination pair. We provide such a scheme for general undirected graphs which uses only \(\log n + O(\log \log n)\) bit labels on each vertex, improving the previously known best label size of \(3 \log n\) bits. The talk is based on joint work with Zoltan Kiraly. | ||
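The greedy forwarding rule is simple to sketch: each vertex hands the message to the neighbor closest to the destination in the embedding's metric, failing if no neighbor makes progress. Delivery is only guaranteed when the embedding is actually greedy; the square embedding below is an illustrative example, not the scheme from the talk.

```python
import math

def greedy_route(adj, pos, s, t):
    """adj: vertex -> list of neighbors; pos: vertex -> (x, y).
    Returns the greedy path from s to t, or None if routing gets stuck."""
    path = [s]
    while path[-1] != t:
        v = path[-1]
        nxt = min(adj[v], key=lambda u: math.dist(pos[u], pos[t]))
        if math.dist(pos[nxt], pos[t]) >= math.dist(pos[v], pos[t]):
            return None  # stuck in a local minimum: delivery fails
        path.append(nxt)
    return path

# a 4-cycle embedded on a square: greedy routing succeeds
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
print(greedy_route(adj, pos, 0, 2))  # [0, 1, 2]
```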
16:00–16:20 | Coffee break | |
16:20–16:40 | Aurélien Ooms (Université Libre de Bruxelles) |
A geometric approach to k-SUM |
The k-SUM problem consists of deciding, given a set of numbers, whether there exists a k-subset whose sum is zero. We show that for any \(\epsilon>0\), there exists an \(\epsilon n\)-linear decision tree of depth at most \(O(n^{3}\,\mathrm{polylog}\,n)\) solving the k-SUM problem for \(k=O(1)\). We also show that the algorithm can be modified to perform \(o(n^4)\) queries, each of which is \(o(n)\)-linear.
The results are achieved by streamlining a cutting-based algorithm by Meiser for point location in arrangements of hyperplanes, and combining it with a simple blocking scheme. While it has long been known that constant-degree polynomial running times are achievable by nonuniform algorithms for the knapsack problem, this is the first thorough treatment specific to the k-SUM problem. |
||
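For contrast with the \(o(n^4)\)-query decision tree above, the naive uniform algorithm for k-SUM simply tries all \(\binom{n}{k}\) subsets — an illustrative baseline only:

```python
from itertools import combinations

def k_sum(nums, k):
    """Decide whether some k-subset of nums sums to zero."""
    return any(sum(c) == 0 for c in combinations(nums, k))

print(k_sum([-7, 1, 2, 5, 9], 3))  # True: -7 + 2 + 5 = 0
print(k_sum([1, 2, 3], 2))         # False
```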
16:40–17:00 | Ingo van Duijn (Aarhus University) |
Applications of incidence bounds in point covering problems |
In the Line Cover problem a set of \(n\) points (in any dimension) is given and the task is to cover the points using at most \(k\) lines. We present an algorithm that solves the problem in \(O((ck/\log k)^k)\) time for a universal constant \(c\), whereas the previously fastest known algorithm solves the problem in \(O((k/1.35)^k)\) time. The main idea of our algorithm is to try to cover rich lines first (lines covering many points). By the Szemerédi–Trotter incidence bound, we can bound the number of rich lines, giving low branching in the initial phase of the algorithm. When the incidence bounds are not tight enough to guarantee good branching behaviour, we have few enough points to efficiently switch to a base-case algorithm running in \(O(2^n)\) time. | ||
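The classic branching idea that the rich-line refinement improves on can be sketched directly: pick an uncovered point \(p\) and branch on the lines through \(p\) and each other point (a line serving only \(p\) can always be rotated to hit a second point, so this branching is complete). The sketch below omits the rich-line phase from the talk.

```python
def collinear(p, q, r):
    # exact integer test: r lies on the line through p and q
    return ((q[0] - p[0]) * (r[1] - p[1])
            == (q[1] - p[1]) * (r[0] - p[0]))

def line_cover(points, k):
    """Decide whether the 2D points can be covered by at most k lines."""
    points = list(points)
    if len(points) <= k:   # one line per remaining point suffices
        return True
    if k == 0:
        return False
    p = points[0]
    for q in points[1:]:
        # use the line through p and q, recurse on the uncovered rest
        rest = [r for r in points if not collinear(p, q, r)]
        if line_cover(rest, k - 1):
            return True
    return False

# a 3x3 grid needs exactly 3 lines
grid = [(x, y) for x in range(3) for y in range(3)]
print(line_cover(grid, 3), line_cover(grid, 2))  # True False
```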
17:00–17:20 | Ken Arroyo Ohori (TU Delft) |
Constructing an n-dimensional topological representation from a soup of (n-1)-dimensional faces |
An intuitive way to incrementally build an n-dimensional object is to describe it based on a set of (n-1)-dimensional faces that form a cellular decomposition of its boundary. For many applications, it is however required to ‘stitch’ these faces together by finding their common ridges, i.e. the (n-2)-cells along which they must be joined. The talk will present an efficient method to do this based on quick isomorphism tests using signatures combined with a simple index of all vertices. It has been tested on 3D and 4D objects and is orders of magnitude faster than the naïve (brute force) approach. | ||
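The stitching step can be illustrated in its simplest form: index every ridge of every input face by its sorted vertex tuple, so that faces sharing a ridge can be found and glued in near-linear time. The signature-based isomorphism tests from the talk are omitted; the faces below are vertex cycles of 3D polygons (so ridges are edges), an illustrative special case.

```python
from collections import defaultdict

def ridge_index(faces):
    """faces: list of faces, each a cyclic list of vertex ids.
    Returns a dict: sorted ridge vertex tuple -> list of face indices."""
    index = defaultdict(list)
    for fid, face in enumerate(faces):
        for i in range(len(face)):
            ridge = tuple(sorted((face[i], face[(i + 1) % len(face)])))
            index[ridge].append(fid)
    return index

# two unit squares sharing the edge {1, 2}
faces = [[0, 1, 2, 3], [1, 4, 5, 2]]
idx = ridge_index(faces)
print(idx[(1, 2)])  # [0, 1] -- these faces must be joined along this ridge
```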
17:20–17:40 | Luis Barba (Université Libre de Bruxelles) |
Dynamic Graph Coloring |
In this talk, we study the number of vertex recolorings that an algorithm needs to perform in order to maintain a proper coloring of a \(C\)-colorable graph under four types of updates: (i) delete an edge; (ii) insert an edge; (iii) delete a vertex and all incident edges; or (iv) insert a vertex with a set of incident edges. We assume that all updates keep the graph \(C\)-colorable and that \(N\) is the maximum number of vertices in the graph at any point in time.
We present two algorithms to maintain an approximation of the optimal coloring that, together, present a trade-off between the number of recolorings and the number of colors used to color the graph: For any \(d>0\), the first algorithm maintains a proper \(O(C d N^{1/d})\)-coloring and recolors at most \(O(d)\) vertices per update. The second maintains an \(O(C d)\)-coloring using \(O(dN^{1/d})\) recolorings per update. Both algorithms achieve the same asymptotic performance when \(d = \log N\). Moreover, we show that for any algorithm that maintains a \(c\)-coloring of a \(2\)-colorable graph on \(N\) vertices during a sequence of \(m\) updates, there is a sequence of updates that forces the algorithm to recolor at least \(\Omega(m\cdot N^\frac{2}{c(c-1)})\) vertices. This is joint work with Jean Cardinal, Matias Korman, Stefan Langerman, André van Renssen, Marcel Roeloffzen and Sander Verdonschot. |
||
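A naive baseline makes the recoloring count concrete (this is not one of the trade-off algorithms above): on each edge insertion, repair any conflict by recoloring one endpoint with the smallest color absent from its neighborhood.

```python
class DynamicColoring:
    """Naive dynamic proper coloring: repair conflicts greedily."""

    def __init__(self):
        self.adj = {}
        self.color = {}
        self.recolorings = 0

    def insert_vertex(self, v, neighbors=()):
        self.adj[v] = set(neighbors)
        for u in neighbors:
            self.adj[u].add(v)
        self.color[v] = self._free_color(v)

    def insert_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        if self.color[u] == self.color[v]:
            self.color[v] = self._free_color(v)
            self.recolorings += 1

    def _free_color(self, v):
        # smallest nonnegative color unused in v's neighborhood
        used = {self.color[u] for u in self.adj[v] if u in self.color}
        c = 0
        while c in used:
            c += 1
        return c

g = DynamicColoring()
for v in range(3):
    g.insert_vertex(v)
g.insert_edge(0, 1)
g.insert_edge(1, 2)
g.insert_edge(0, 2)  # building a triangle: at most one recoloring per edge
print(g.color, g.recolorings)
```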
17:40–... | Drinks (borrel) | |
Move downtown for dinner around 18:00 | ||
18:30–... | Dinner at restaurant Griftpark 1 |