SWAT 2012 is co-located with CPM 2012. Registration for SWAT 2012 gives you access to all technical sessions and keynote talks at both SWAT 2012 and CPM 2012.
The joint programme at a glance:

- Tuesday, 3 July 2012: invited talk; contributed talks (CPM).
- Wednesday, 4 July 2012: invited talk; contributed talks (SWAT and CPM).
- Thursday, 5 July 2012: invited talk; contributed talks (SWAT and CPM).
- Friday, 6 July 2012: invited talk; contributed talks (SWAT).
For the conference proceedings, see LNCS volume 7357 (SWAT 2012) and LNCS volume 7354 (CPM 2012).
The joint CPM and SWAT programme is also available as a PDF file (updated on 27 June 2012) and in Google Calendar (html, ical, xml).
Abstract. Understanding complex disease is one of today's grand challenges. In spite of the rapid advance of biotechnology, disease understanding is still very limited, and there is a dire need for further computational tools for disease-related data analysis. In this talk I will describe some of the approaches that we are developing for these challenges. I will describe methods for utilizing expression profiles of sick and healthy individuals to identify pathways dysregulated in the disease, methods for integrated analysis of expression and protein interactions, and methods for regulatory motif discovery. If time allows, I will discuss methods for the analysis of genome aberrations in cancer. The utility of the methods will be demonstrated on biological examples.
Biography. Prof. Ron Shamir leads the Computational Genomics group at the Blavatnik School of Computer Science, Tel Aviv University (TAU). He is the head of the Edmond J. Safra Center for Bioinformatics at TAU and holds the Raymond and Beverly Sackler Chair in Bioinformatics. He develops novel algorithmic methods in Bioinformatics and Systems Biology. His research interests include gene expression analysis, modeling and dissection of molecular networks, gene regulation and cancer genomics. Methods and software tools developed by Shamir’s group are in use by many laboratories around the world.
Shamir received a BSc in Mathematics and Physics from the Hebrew University, and a PhD in Operations Research from UC Berkeley in 1984. He has been on the faculty of TAU since 1987. He has published over 220 scientific works, including 17 books and edited volumes, and has supervised 45 graduate students. He serves on the editorial boards of eleven scientific journals and series, and was on the steering committee of the RECOMB conference series for thirteen years. In 2000 he founded the Bioinformatics undergraduate program at TAU. He co-founded the Israeli Society of Bioinformatics and Computational Biology and served as its president from 2004 to 2006. He is a recipient of the 2011 Landau Prize in Bioinformatics.
Abstract. The wavelet tree is a versatile data structure that serves a number of purposes, from string processing to geometry and more. It can be regarded as a device that represents a sequence, that represents a grid of points, or that encodes a reordering of a sequence of elements. In addition, its space adapts to various compressibility criteria of the data it encodes, enabling compressed representations. New competitive solutions to fundamental problems, based on wavelet trees, are appearing every year. In this talk I will give an overview of the surprising number of applications in which I have found wavelet trees extremely useful: basic and weighted point grids, sets of rectangles, strings, permutations, binary relations, graphs, inverted indexes, document retrieval indexes, full-text indexes, XML indexes, and general numeric sequences.
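To make the "device that represents a sequence" view concrete, here is a minimal wavelet tree sketch (ours, not from the talk): the alphabet is split recursively into halves, a bitvector at each node records which half each symbol falls into, and rank queries descend the tree. Real implementations back the bitvectors with compressed structures supporting O(1) rank.

```python
# A minimal wavelet tree over a small integer alphabet (illustration
# only: real implementations use compressed bitvectors with O(1) rank).

class WaveletTree:
    def __init__(self, seq, lo=None, hi=None):
        if lo is None:
            lo, hi = min(seq), max(seq)
        self.lo, self.hi = lo, hi
        self.bits = None
        if lo == hi or not seq:
            return                              # leaf node
        mid = (lo + hi) // 2
        # one bit per position: does the symbol go to the upper half?
        self.bits = [1 if x > mid else 0 for x in seq]
        self.ones = [0]                         # prefix sums = rank1
        for b in self.bits:
            self.ones.append(self.ones[-1] + b)
        self.left = WaveletTree([x for x in seq if x <= mid], lo, mid)
        self.right = WaveletTree([x for x in seq if x > mid], mid + 1, hi)

    def rank(self, c, i):
        """Occurrences of symbol c among the first i symbols."""
        if self.bits is None:                   # leaf: single symbol
            return i if c == self.lo else 0
        mid = (self.lo + self.hi) // 2
        ones = self.ones[i]
        if c <= mid:
            return self.left.rank(c, i - ones)  # zeros go left
        return self.right.rank(c, ones)         # ones go right

wt = WaveletTree([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
assert wt.rank(1, 4) == 2   # two 1s among the first four symbols
assert wt.rank(5, 10) == 2  # two 5s in the whole sequence
```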
Biography. Gonzalo Navarro earned his PhD in Computer Science in 1998 from the University of Chile, where he is currently Full Professor. His areas of interest include algorithms and data structures, text searching, compression, and metric space searching.
Navarro has headed the Center for Web Research (a Millennium Nucleus funded by the Chilean government), RIBIDI (a Latin American project funded by CYTED), a project funded by Yahoo! Research, and other personal-scale projects. He has also participated in several research projects, such as ECOS/CONICYT (a Chile-France cooperation), AMYRI (a CYTED project), and ICDB (a Millennium Institute for bioinformatics).
Prof. Navarro has been PC (co-)chair of several conferences, including SPIRE 2001, SPIRE 2005, and SIGIR 2005 Posters. He co-created the SISAP conference in 2008 and co-chaired it in 2008 and 2012. He is a member of the steering committees of the LATIN and SISAP conferences, and of the editorial boards of the Information Retrieval journal and the ACM Journal of Experimental Algorithmics. He has been a PC member of around 50 national and international conferences and a reviewer for around 40 international journals. He has given around 50 invited talks, including 8 keynote talks at international conferences.
Navarro has coauthored a book published by Cambridge University Press, 15 book chapters, almost 100 journal papers and 175 international conference papers.
Abstract. The watchman route problem is a classic optimization problem in computational geometry: Determine a shortest path/tour for an observer to be able to view all points within a given geometric domain. The problem combines aspects of the traveling salesperson problem and the set cover problem. While some special cases have polynomial-time exact solutions, most versions of the problem are NP-hard, so attention focuses on approximation algorithms. The problem comes in many varieties, depending on the nature of the domain being searched, assumptions about the searcher(s), and the objective function. We briefly survey the research area of visibility coverage tours and describe some recent advances in approximation algorithms.
Biography. Prof. Joe Mitchell leads the Computational Geometry group at Stony Brook University. His research interests are broadly in the area of computational geometry and algorithms, applied to problems in networks, transportation, manufacturing, graphics, visualization, and geographic information systems. His research group collaborates with industry and has developed software for geometric algorithms used in academic research and licensed to numerous companies.
Mitchell received a BS (Physics and Applied Mathematics) and an MS (Mathematics) from Carnegie Mellon University, and a PhD (Operations Research) from Stanford University in 1986. He served on the faculty of Cornell University and then joined Stony Brook University, where he is now Professor of Applied Mathematics and Statistics and Research Professor of Computer Science. Mitchell has received numerous teaching awards and various research awards, including ACM Fellow, 2010 Gödel Prize, NSF Presidential Young Investigator, Fulbright Scholar, and the President's Award for Excellence in Scholarship and Creative Activities. He serves on editorial boards of Algorithmica, the Journal of Graph Algorithms and Applications, and all four computational geometry journals. He serves on the Steering Committee for the Symposium on Computational Geometry and was its founding Chair.
Abstract. The title of my presentation is a motto originally coined in urban design about 100 years ago, but used in many different contexts since. The motto can also be applied to many areas of computer science. For instance, in computational game theory, each agent tries to maximize his or her own (local) benefit, and we analyze the (global) social welfare. Likewise, computer architecture is designed for locality of reference. Recently, the distributed computing community has made tremendous progress towards understanding the complexity of distributed message passing algorithms. For networks, a rich selection of upper and lower bounds on how much time it takes to solve or approximate a problem has been established; we now have a good understanding of how local actions and the global goal influence each other. This distributed complexity theory may ultimately help to give the "think global, act local" motto a mathematical foundation. In my talk I will introduce the message passing model, present a few selected results, mention prominent open problems, and discuss some of the most exciting future research directions. Upcoming applications may not necessarily lie in computer science but rather in other areas that deal with networked systems, e.g. biology or the social sciences.
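As a rough illustration of the synchronous message passing model the talk introduces (our sketch, not taken from the talk): in each round every node sends a message to its neighbours, then updates its local state from the messages it received. Flooding distance estimates shows purely local actions computing a global quantity.

```python
# Toy synchronous message-passing network: in each round every node
# sends its state to all neighbours, then updates from its inbox only.
# Flooding distance estimates to node 0 yields BFS distances, a global
# quantity computed by strictly local actions.

def flood_distances(adj, source=0):
    n = len(adj)
    dist = [0 if v == source else float("inf") for v in range(n)]
    for _ in range(n):                      # n rounds certainly suffice
        inbox = [[dist[u] for u in adj[v]] for v in range(n)]
        dist = [min([dist[v]] + [m + 1 for m in inbox[v]])
                for v in range(n)]
    return dist

cycle = [[(v - 1) % 6, (v + 1) % 6] for v in range(6)]   # a 6-cycle
assert flood_distances(cycle) == [0, 1, 2, 3, 2, 1]
```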
Biography. Roger Wattenhofer is a full professor at the Information Technology and Electrical Engineering Department, ETH Zurich, Switzerland. He received his doctorate in Computer Science in 1998 from ETH Zurich. From 1999 to 2001 he was in the USA, first at Brown University in Providence, RI, then at Microsoft Research in Redmond, WA. He then returned to ETH Zurich, originally as an assistant professor at the Computer Science Department. Roger Wattenhofer's research interests are distributed evenly between algorithmic theory and practical systems. Just three days before SWAT he received the 2012 Prize for Innovation in Distributed Computing.
A new class of visibility problems based on the notion of α-visibility is introduced. A segment t in a polygonal scene S is said to be α-visible from a point p if and only if there exists a triangle pab such that (a) the angle at p equals α, (b) the segment ab ⊆ t, and (c) the triangle pab does not intersect any object of S in its interior, i.e., the triangle pab is empty.
In this model of visibility, we study the classical variants of point visibility, weak and complete segment visibility, and the construction of the visibility graph. We also investigate the natural query versions of these problems, and problem instances in which α is either fixed or specified at query time.
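The α-visibility predicate can be checked naively from the definition. The sketch below (our illustration, not the paper's algorithm; it glosses over degenerate cases with tolerances) verifies whether a given triangle pab certifies that t is α-visible from p.

```python
import math

# Naive certificate checker for alpha-visibility: (a) apex angle alpha,
# (b) ab contained in t, (c) no scene segment meets the triangle interior.
EPS = 1e-9

def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def cross(u, v): return u[0] * v[1] - u[1] * v[0]

def angle_at(p, a, b):
    u, v = sub(a, p), sub(b, p)
    return math.atan2(abs(cross(u, v)), u[0] * v[0] + u[1] * v[1])

def on_segment(q, seg):
    a, b = seg
    return (abs(cross(sub(b, a), sub(q, a))) < EPS and
            min(a[0], b[0]) - EPS <= q[0] <= max(a[0], b[0]) + EPS and
            min(a[1], b[1]) - EPS <= q[1] <= max(a[1], b[1]) + EPS)

def strictly_inside(q, tri):
    s = [cross(sub(tri[(i + 1) % 3], tri[i]), sub(q, tri[i])) for i in range(3)]
    return all(x > EPS for x in s) or all(x < -EPS for x in s)

def properly_cross(s1, s2):
    (a, b), (c, d) = s1, s2
    d1, d2 = cross(sub(b, a), sub(c, a)), cross(sub(b, a), sub(d, a))
    d3, d4 = cross(sub(d, c), sub(a, c)), cross(sub(d, c), sub(b, c))
    return d1 * d2 < -EPS and d3 * d4 < -EPS

def certifies_alpha_visibility(p, a, b, t, scene, alpha):
    if abs(angle_at(p, a, b) - alpha) > 1e-6:
        return False                                # (a) apex angle is not alpha
    if not (on_segment(a, t) and on_segment(b, t)):
        return False                                # (b) ab is not contained in t
    tri = (p, a, b)
    edges = [(tri[i], tri[(i + 1) % 3]) for i in range(3)]
    for seg in scene:                               # (c) interior must be empty
        if any(strictly_inside(q, tri) for q in seg) or \
           any(properly_cross(seg, e) for e in edges):
            return False
    return True

p, t = (0.0, 0.0), ((3.0, -1.0), (3.0, 1.0))
a, b = (3.0, 0.5), (3.0, 1.0)
alpha = angle_at(p, a, b)
assert certifies_alpha_visibility(p, a, b, t, [], alpha)
assert not certifies_alpha_visibility(p, a, b, t, [((2.0, 0.0), (2.0, 0.6))], alpha)
```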
Given a set L of non-parallel lines, a watchman route for L is a closed curve contained in the union of the lines in L such that every point on any line is visible (along a line) from at least one point of the route; similarly, we define a watchman route for a connected set S of line segments. The watchman route problem for a given set of lines or line segments is to find a shortest watchman route for the input set, and these problems are natural special cases of the watchman route problem in multiply connected polygonal domains.
In this paper, we show that the problem of computing a shortest watchman route for a set of n non-parallel lines in the plane is polynomially tractable, while it becomes NP-hard in 3D. Then, we reprove NP-hardness of this problem for line segments in the plane and provide a polynomial-time approximation algorithm with ratio O(log^3 n). Additionally, we consider some special cases of the watchman route problem on line segments, for which we are able to provide improved approximation or exact algorithms.
We investigate higher-order Voronoi diagrams in the city metric. This metric is induced by quickest paths in the L1 metric in the presence of an accelerating transportation network of axis-parallel line segments. For the structural complexity of kth-order city Voronoi diagrams of n point sites, we show an upper bound of O(k(n − k) + kc) and a lower bound of Ω(n + kc), where c is the complexity of the transportation network. This is quite different from the bound O(k(n − k)) in the Euclidean metric. For the special case where k = n − 1 the complexity in the Euclidean metric is O(n), while that in the city metric is Θ(nc).
Furthermore, we develop an O(k^2 (n + c) log n)-time iterative algorithm to compute the kth-order city Voronoi diagram and an O(nc log^2(n + c) log n)-time divide-and-conquer algorithm to compute the farthest-site city Voronoi diagram.
We construct a new proximity graph on a set of n points, called the Pie Delaunay graph (PDG), which is a supergraph of the Yao graph (YG) and the Euclidean minimum spanning tree (EMST). We efficiently maintain the PDG as the points move in the plane, and we use the kinetic PDG to create a kinetic data structure (KDS) for the maintenance of the YG and the EMST on a set of n moving points in 2-dimensional space. Assuming the x- and y-coordinates of the points are defined by algebraic functions of degree at most s, the structure uses O(n) space, O(n log n) preprocessing time, and processes O(n^2 λ_{2s+2}(sn) β_{s+2}(n)) events for the YG and O(n^2 λ_{2s+2}(sn)) events for the EMST, each in O(log^2 n) time. Here, λ_s(n) = n β_s(n) is the maximum length of Davenport-Schinzel sequences of order s on n symbols.
Our KDS processes a nearly cubic number of events for the EMST, which improves the previous bound of O(n^4) by Rahmati et al.
In the Connected Vertex Cover problem we are given an undirected graph G together with an integer k, and we are to find a subset of vertices X of size at most k such that X contains at least one endpoint of each edge and, moreover, X induces a connected subgraph. For this problem we present a deterministic algorithm running in O(2^k n^{O(1)}) time and polynomial space, improving over the previously best deterministic O(2.4882^k n^{O(1)})-time algorithm and the randomized O(2^k n^{O(1)})-time algorithm. Furthermore, when the use of exponential space is allowed, we present an O(2^k k(n + m))-time algorithm that solves a more general variant with arbitrary real weights.
Finally, we show that in O(2^k k(n + m)) time and O(2^k k) space one can count the number of connected vertex covers of size at most k, which cannot be improved to O((2 − ε)^k n^{O(1)}) for any ε > 0 under the Strong Exponential Time Hypothesis, as shown by Cygan et al. [CCC'12].
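For concreteness, the brute-force checker below (our illustration; exponential in n, unlike the paper's algorithms) pins down the object being counted: a set of at most k vertices that covers every edge and induces a connected subgraph.

```python
from itertools import combinations

def count_connected_vertex_covers(n, edges, k):
    """Count sets X with |X| <= k that cover every edge and induce a
    connected subgraph. Brute force over all subsets."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(X):
        seen, stack = set(), [min(X)]
        while stack:                     # DFS inside the induced subgraph
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend((adj[u] & X) - seen)
        return seen == X

    count = 0
    for size in range(1, k + 1):
        for X in map(set, combinations(range(n), size)):
            if all(u in X or v in X for u, v in edges) and connected(X):
                count += 1
    return count

# On the 4-cycle, exactly the four sets of three consecutive vertices
# are connected vertex covers of size at most 3.
assert count_connected_vertex_covers(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3) == 4
```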
An undirected graph is said to be split if its vertex set can be partitioned into two sets such that the subgraph induced on one of them is a complete graph and the subgraph induced on the other is an independent set. We study the problem of deleting the minimum number of vertices or edges from a given input graph so that the resulting graph is split. The problem is well known to be NP-complete by a classical result of Lewis and Yannakakis (JCSS 1980). We address the problem in the parameterized complexity framework.
As this graph class is characterized by a finite set of forbidden induced subgraphs, the problem is known to be fixed-parameter tractable by a general result of Cai (IPL 1996). However, unlike for the closely related class of chordal graphs, deletion problems related to split graphs are not well studied. We initiate a systematic study and give efficient fixed-parameter algorithms and polynomial-sized kernels for the problem.
We obtain our algorithms by proving some interesting structural properties of split graphs, which may be of independent interest. In addition, we note that our algorithm for Split Edge Deletion adds to the small number of subexponential-time parameterized algorithms that are not obtained through bidimensionality and that apply to general graphs.
Given an input graph G and an integer k, the parameterized K_4-minor cover problem asks whether there is a set S of at most k vertices whose deletion results in a K_4-minor-free graph, or equivalently in a graph of treewidth at most 2. This problem is inspired by two well-studied parameterized vertex deletion problems, Vertex Cover and Feedback Vertex Set, which can also be expressed as Treewidth-t Vertex Deletion problems: t = 0 for Vertex Cover and t = 1 for Feedback Vertex Set.
While a single-exponential FPT algorithm has been known for a long time for Vertex Cover, such an algorithm for Feedback Vertex Set was devised comparatively recently. While it is known to be unlikely that Treewidth-t Vertex Deletion can be solved in c^{o(k)} · n^{O(1)} time, it was open whether K_4-minor cover could be solved in single-exponential FPT time, i.e. in c^k · n^{O(1)} time for some constant c. This paper answers this question in the affirmative.
The biclique problem asks, given a graph G and a parameter k, whether G has a complete bipartite subgraph of k vertices in each part (a biclique of order k). Fixed-parameter tractability of this problem is a longstanding open question in parameterized complexity that received a lot of attention from the community. In this paper we consider a restricted version of this problem by introducing an additional parameter s and assuming that G does not have induced (i.e. chordless) paths of length s.
We prove that under this parameterization the problem becomes fixed-parameter linear. The main tool in our proof is a Ramsey-type theorem stating that a graph with a long (not necessarily induced) path contains either a long induced path or a large biclique.
Let K be a simplicial complex and g the rank of its p-th homology group H_p(K), defined with Z_2 coefficients. We show that we can compute a basis H of H_p(K) and annotate each p-simplex of K with a binary vector of length g with the following property: the annotations, summed over all p-simplices in any p-cycle z, provide the coordinate vector of the homology class [z] in the basis H. The basis and the annotations for all simplices can be computed in O(n^ω) time, where n is the size of K and ω < 2.376 is a constant such that two n × n matrices can be multiplied in O(n^ω) time. The pre-computation of annotations permits answering queries about the independence or the triviality of p-cycles efficiently.
Using annotations of edges in 2-complexes, we derive better algorithms for computing an optimal basis and optimal homologous cycles in 1-dimensional homology. Specifically, for computing an optimal basis of H_1(K), we improve the best known time complexity for the problem from O(n^4) to O(n^ω + n^2 g^{ω−1}). Here n denotes the size of the 2-skeleton of K and g the rank of H_1(K). Computing an optimal cycle homologous to a given 1-cycle is NP-hard even for surfaces, and an algorithm taking O(2^{O(g)} n log n) time is known for surfaces. We extend this algorithm to work with arbitrary 2-complexes in O(n^ω + 2^{O(g)} n^2 log n) time using annotations.
The coverage area of a directional antenna located at point p is a circular sector of angle α, whose orientation and radius can be adjusted. The interference at p, denoted I(p), is the number of antennas that cover p, and the interference of a communication graph G = (P, E) is I(G) = max { I(p) : p ∈ P }. In this paper we address the question in its title. That is, we study several variants of the following problem: What is the minimum interference I, such that for any set P of n points in the plane, representing transceivers equipped with a directional antenna of angle α, one can assign orientations and ranges to the points in P, so that the induced communication graph G is either connected or strongly connected and I(G) ≤ I.
In the asymmetric model (i.e., G is required to be strongly connected), we prove that I = O(1) for α < 2π/3, in contrast with I = Θ(log n) for α = 2π, proved by Korman [K11]. In the symmetric model (i.e., G is required to be connected), the situation is less clear. We show that I = Θ(n) for α < π/2, and prove that I = O(n^{1/2}) for π/2 ≤ α ≤ 3π/2 by applying the Erdős-Szekeres theorem. The corresponding result for α = 2π is I = Θ(n^{1/2}), proved by Halldórsson and Tokuyama [HT08].
As in [K11] and [HT08], which deal with the case α = 2π, in both models we assign ranges that are bounded by some constant c, assuming that UDG(P) (i.e., the unit disk graph over P) is connected. Moreover, the O(n^{1/2}) bound in the symmetric model reduces to O(∆^{1/2}), where ∆ is the maximum degree of UDG(P).
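Following the definitions above, the sketch below (ours; the sector-membership test and the convention that an antenna does not count towards the interference at its own location are assumptions) computes I(p) for every point and hence I(G) for a concrete layout.

```python
import math

def covers(antenna, p):
    """Does this antenna's circular sector contain point p?"""
    (x, y), theta, alpha, r = antenna   # position, orientation, angle, range
    dx, dy = p[0] - x, p[1] - y
    if math.hypot(dx, dy) > r or (dx, dy) == (0.0, 0.0):
        return False
    # angular distance between the direction to p and the sector bisector
    diff = abs((math.atan2(dy, dx) - theta + math.pi) % (2 * math.pi) - math.pi)
    return diff <= alpha / 2 + 1e-12

def interference(antennas):
    # I(p) counts the *other* antennas covering p; I(G) is the maximum
    return max(sum(covers(a, b[0]) for a in antennas if a is not b)
               for b in antennas)

alpha = math.pi / 2
ants = [((0.0, 0.0), 0.0, alpha, 2.0),           # aimed east
        ((1.0, 0.0), math.pi, alpha, 2.0),       # aimed west
        ((1.0, 1.0), -math.pi / 2, alpha, 2.0)]  # aimed south
assert interference(ants) == 2
```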
Let S be a set of n points in d-space. A convex Steiner partition is a tiling of conv(S) with empty convex bodies. For every integer d, we show that S admits a convex Steiner partition with at most ⌈(n − 1)/d⌉ tiles. This bound is the best possible for affinely independent points in the plane, and it is best possible apart from constant factors in every dimension d ≥ 3. We also give the first constant-factor approximation algorithm for computing a minimum Steiner convex partition of an affinely independent point set in the plane. Determining the maximum possible volume of a single tile in a Steiner partition is equivalent to a famous problem of Danzer and Rogers. We give a (1 − ε)-approximation for the maximum volume of an empty convex body when S lies in the d-dimensional unit box [0,1]^d.
We consider the following variant of the speed scaling problem introduced by Yao, Demers, and Shenker. We are given a set of jobs and we have a variable-speed processor to process them. The higher the processor speed, the higher the energy consumption. Each job is associated with its own release time, deadline, and processing volume. The objective is to find a feasible schedule that minimizes the energy consumption. Moreover, no preemption of jobs is allowed.
It is somewhat surprising that the non-preemptive version of the speed scaling problem has not been studied since Yao et al. introduced the problem back in 1995. Unlike the preemptive version, which is known to be in P, the non-preemptive version is strongly NP-hard. In this work, we present a constant-factor approximation algorithm for it. The main technical idea is to transform the problem into the unrelated machine scheduling problem with the L_p-norm objective.
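As a small worked illustration (ours, assuming the common power function s^α with α = 3, which the abstract does not specify): in a non-preemptive schedule each job occupies a single interval, and by convexity of s^α it is best to run the job at constant speed within that interval, so evaluating a candidate schedule's energy is straightforward.

```python
ALPHA = 3   # power = speed ** ALPHA, a common modelling choice

def energy(jobs, schedule, alpha=ALPHA):
    """jobs: list of (release, deadline, volume); schedule: one interval
    (begin, end) per job. Returns total energy, or None if infeasible."""
    ivs = sorted(schedule)
    if any(e1 > b2 for (_, e1), (b2, _) in zip(ivs, ivs[1:])):
        return None                      # intervals overlap: one processor only
    total = 0.0
    for (r, d, v), (b, e) in zip(jobs, schedule):
        if b < r or e > d or e <= b:     # violates release time or deadline
            return None
        s = v / (e - b)                  # constant speed, optimal by convexity
        total += s ** alpha * (e - b)    # energy = power x duration
    return total

jobs = [(0, 2, 2), (1, 4, 2)]
assert energy(jobs, [(0, 2), (2, 4)]) == 4.0   # both jobs run at speed 1
assert abs(energy(jobs, [(0, 1), (1, 4)]) - (8 + 8 / 9)) < 1e-12
```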
In this paper we consider a variant of the orthogonal range reporting problem in which all points must be reported in the sorted order of their x-coordinates. We show that reporting two-dimensional points under this additional condition can be supported (almost) as efficiently as standard range reporting.
Moreover, our results generalize and improve the previously known results for the orthogonal range successor problem and can be used to obtain better solutions for some stringology problems.
We consider dynamic subgraph connectivity problems for planar graphs. In this model there is a fixed underlying planar graph, where each edge and vertex is either "off" (failed) or "on" (recovered). We wish to answer connectivity queries with respect to the "on" subgraph.
The model has two natural variants, one in which there are d edge/vertex failures that precede all connectivity queries, and one in which failures/recoveries and queries are intermixed.
We present a d-failure connectivity oracle for planar graphs that processes any d edge/vertex failures in sort(d, n) time so that connectivity queries can be answered in pred(d, n) time. (Here sort(d, n) and pred(d, n) denote the time for integer sorting and for integer predecessor search over a subset of [n] of size d, respectively.) Our algorithm has two parts. The first is an algorithm tailored to triconnected planar graphs. It makes use of Barnette's theorem, which states that every triconnected planar graph contains a degree-3 spanning tree.
The second part is a generic reduction from general (planar) graphs to triconnected (planar) graphs. We extend this algorithm to the subgraph connectivity model, where edge/vertex failures (but no recoveries) are intermixed with connectivity queries. In triconnected planar graphs each failure and query is handled in O(log n) amortized time, whereas in general planar graphs both bounds become O(log^2 n).
We study the well-known frequent items problem in data streams from a competitive analysis point of view.
We consider the standard worst-case input model, as well as a weaker distributional adversarial setting. We are primarily interested in the single-slot memory case, and for both models we give (asymptotically) tight bounds of Θ(N^{1/2}) and Θ(N^{1/3}), respectively, achieved by very simple and natural algorithms, where N is the length of the stream. We also provide lower bounds, for both models, in the more general case of an arbitrary memory size k ≥ 1.
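The abstract does not spell out its algorithms, but the classic Boyer-Moore majority-vote scheme below, which stores a single item together with a counter, is a natural example of the kind of single-slot streaming algorithm studied.

```python
# Boyer-Moore majority vote: one stored item plus one counter.
# If some item occurs more than N/2 times, it is returned.

def majority_candidate(stream):
    candidate, count = None, 0
    for x in stream:
        if count == 0:
            candidate, count = x, 1      # adopt a new candidate
        elif x == candidate:
            count += 1                   # reinforce the current candidate
        else:
            count -= 1                   # cancel against a differing item
    return candidate

assert majority_candidate([1, 2, 1, 3, 1, 1]) == 1
```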
Assuming the AND-distillation conjecture, the Pathwidth problem of determining whether a given graph G has pathwidth at most k admits no polynomial kernelization with respect to k. The present work studies the existence of polynomial kernels for Pathwidth with respect to other, structural, parameters.
Our main result is that, unless NP ⊆ coNP/poly, Pathwidth admits no polynomial kernelization even when parameterized by the vertex deletion distance to a clique, by giving a cross-composition from Cutwidth. The cross-composition works also for Treewidth, improving over previous lower bounds by the present authors. For Pathwidth, our result rules out polynomial kernels with respect to the distance to various classes of polynomial-time solvable inputs, like interval or cluster graphs.
This raises the question of whether there are nontrivial structural parameters for which Pathwidth does admit a polynomial kernelization. To answer this, we give a collection of graph reduction rules that are safe for Pathwidth. We analyze the strength of these rules and obtain polynomial kernelizations with respect to the following parameters: the size of a vertex cover of the graph, the vertex deletion distance to a graph in which each connected component is a star, and the vertex deletion distance to a graph in which each connected component has at most c vertices.
This work further explores the applications of co-nondeterminism for showing kernelization lower bounds. The only known example excludes polynomial kernelizations for the RAMSEY problem of finding an independent set or a clique of at least k vertices in a given graph (Kratsch 2012, SODA). We study the more general problem of finding induced subgraphs on k vertices fulfilling some hereditary property Π, called Π-INDUCED SUBGRAPH. The problem is NP-hard for all non-trivial choices of Π by a classic result of Lewis and Yannakakis (JCSS 1980). The parameterized complexity of this problem was classified by Khot and Raman (TCS 2002) depending on the choice of Π. The interesting cases for kernelization are for Π containing all independent sets and all cliques, since the problem is trivial or W[1]-hard otherwise.
Our results are twofold. Regarding Π-INDUCED SUBGRAPH, we show that for a large choice of natural graph properties Π, including chordal, perfect, cluster, and cograph, there is no polynomial kernel with respect to k. This is established by two proofs: one using a co-nondeterministic variant of cross-composition and one by a polynomial parameter transformation from RAMSEY. The latter result is inspired by a second coNP-composition-based proof which is included in the full version.
Additionally, we show how to use improvement versions of NP-hard problems as source problems for lower bounds, without requiring their NP-hardness. E.g., for Π-INDUCED SUBGRAPH our compositions may assume the existence of solutions of size k − 1. We believe this to be useful for further lower-bound proofs, since improvement versions simplify the construction of a disjunction (OR) of instances required in compositions. This adds a second way of using co-nondeterminism for lower bounds.
This paper investigates the number of quantum queries required to solve the problem of reconstructing an unknown string from its substrings in a certain query model. More concretely, the goal is to identify an unknown string S by making queries of the following form: "Is s a substring of S?", where s is a query string over the given alphabet. The number of queries required to identify the string S is the query complexity of this problem.
First we show a quantum algorithm that exactly identifies the string S with at most ¾N + o(N) queries, where N is the length of S. This contrasts sharply with the classical query complexity, which is N. Our algorithm uses Skiena and Sundaram's classical algorithm and Grover search as subroutines. To make them work effectively, we develop another subroutine that finds a string appearing only once in S, which may be of independent interest. We also prove two lower bounds. The first is a general lower bound of Ω(N / log^2 N), which shows that we cannot achieve a query complexity of O(N^{1−ε}) for any constant ε > 0. The second shows that if queries of length roughly between log N and 3 log N cannot be used, then the query complexity cannot be sublinear in N.
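To illustrate the classical baseline (our sketch; a simple linear-query strategy, not necessarily Skiena and Sundaram's more query-efficient algorithm): grow a known substring to the right until it occurs only as a suffix of S, then to the left until it is also a prefix, at which point it must equal S.

```python
def reconstruct(is_substring, alphabet):
    w = ""
    while True:                          # extend to the right
        for c in alphabet:
            if is_substring(w + c):
                w += c
                break
        else:
            break        # no extension: w now occurs only as a suffix of S
    while True:                          # extend to the left
        for c in alphabet:
            if is_substring(c + w):
                w = c + w
                break
        else:
            return w     # w is also a prefix, hence w = S

S = "abracadabra"
queries = []
def oracle(s):
    queries.append(s)
    return s in S        # substring test

assert reconstruct(oracle, "abcdr") == S
# query count is linear in len(S), up to an alphabet-size factor
```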
On Thursday, 5 July 2012, we will have a cruise from the Market Square to Suomenlinna sea fortress, on Royal Line boats. The Market Square (map label B) is within a short walking distance from the conference venue. We will have the SWAT business meeting during the cruise.
In Suomenlinna, we will have the conference banquet in restaurant Walhalla (map label H). After the banquet, a boat will take us back to the Market Square.
Boarding for the cruise starts at 5:45pm, and the boat departs at 6:00pm. Please be there on time: we will not wait for latecomers for any reason.
If you miss our boat, you will miss the excursion. However, you can still arrive on time for the banquet: take the HSL ferry to Suomenlinna and follow the map for Walhalla.