

Keynote speakers

Pierre Alquier (workshop on Combinatorics, Geometry and Statistics)
Gilles Bertrand (workshop on Digital Topology and Mathematical Morphology)
Alexandre Xavier Falcão (main DGCI event)
Tat Yung Kong (workshop on Digital Topology and Mathematical Morphology)
Longin Jan Latecki (main DGCI event)
János Pach (main DGCI event)
Dömötör Pálvölgyi (workshop on Combinatorics, Geometry and Statistics)
Philippe Salembier (workshop on Digital Topology and Mathematical Morphology)
Csaba D. Tóth (workshop on Combinatorics, Geometry and Statistics)

Main DGCI Event
Alexandre Xavier Falcão
University of Campinas, Brazil

"The role of optimum connectivity in image segmentation: Can the algorithm learn object information during the process?"

Abstract

The use of machine learning methods for image segmentation has been actively investigated. Although these approaches can learn shape and texture models from previous examples of a given object, they can only provide its location and, with some user input, its approximate delineation in a new image. On the other hand, their use combined with methods that exploit optimum connectivity for object delineation could be very effective. After illustrating the above statement, we will focus on the role of optimum connectivity in image segmentation by presenting ways to incorporate object information in the connectivity function and the advantages of learning that information during segmentation. The framework of choice is the Image Foresting Transform (IFT), which has been successfully used to devise connectivity-based operators for image filtering, shape representation, data clustering, pattern classification, and image segmentation. In the IFT framework, an image is interpreted as a graph in which the nodes may be pixels, superpixels, or any other element from the image domain, each one represented by a point in some n-dimensional feature space. The arcs of the graph must satisfy some adjacency relation, which may be defined in the image domain and/or feature space. The IFT uses Dijkstra's algorithm, with more general connectivity functions and multiple sources, to transform the image graph into a spanning forest rooted at the minima of the resulting connectivity map, thereby reducing the image operator to a local processing of the forest's attributes (optimum paths, their costs, and root labels).
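The multi-source Dijkstra computation at the heart of the IFT, as described above, can be sketched as follows. This is a minimal illustrative sketch, not code from the talk: the function name `ift_maxarc`, the adjacency format, and the choice of the max-arc-weight connectivity function are all assumptions made for the example.

```python
import heapq

def ift_maxarc(num_nodes, adj, seeds):
    """Minimal IFT sketch: multi-source Dijkstra with the max-arc connectivity
    function, in which the cost of a path is its maximum arc weight.

    adj: dict mapping node -> list of (neighbor, arc_weight) pairs
    seeds: dict mapping seed node -> label (the "sources" of the forest)
    Returns (cost, label, pred) maps; pred encodes the spanning forest and
    label propagates each node's root label, as in IFT-based delineation.
    """
    INF = float("inf")
    cost = {v: INF for v in range(num_nodes)}
    label = {v: None for v in range(num_nodes)}
    pred = {v: None for v in range(num_nodes)}
    heap = []
    for s, lab in seeds.items():
        cost[s] = 0
        label[s] = lab
        heapq.heappush(heap, (0, s))
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost[u]:
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            new_cost = max(c, w)  # max-arc: path cost = maximum arc weight
            if new_cost < cost[v]:
                cost[v] = new_cost
                label[v] = label[u]  # node inherits the label of its root
                pred[v] = u
                heapq.heappush(heap, (new_cost, v))
    return cost, label, pred
```

With two competing seeds, each node ends up in the optimum-path tree of the seed that reaches it most cheaply, which is the seed-competition mechanism the abstract refers to.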
The lecture will cover recent advances in the design of connectivity functions for superpixel and object delineation, the role of optimum connectivity in data clustering and semi-supervised classification, and their use to extract global object information from markers and superpixels. In this part of the lecture, we will focus on IFT-based object delineation from competing seeds, in which one optimum-path tree grows from each seed and the object is defined by the forest rooted at its internal seeds. We will then show the advantages of extracting local object information from the spanning trees during their growing process for dynamic arc-weight estimation. Finally, we will discuss open problems related to connectivity-based object delineation.

Biography

Alexandre Xavier Falcão is a full professor at the Institute of Computing (IC), University of Campinas (Unicamp). He currently holds a research productivity fellowship (level 1B) from the Brazilian National Council for Scientific and Technological Development (CNPq) and serves as Senior Area Editor of the IEEE Signal Processing Letters, coordinator of the Computer Science area at FAPESP, member of the editorial board of the journal Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, and member of the program committee of several conferences (e.g., SPIE Medical Imaging and SIBGRAPI). He received a B.Sc. in Electrical Engineering from the Federal University of Pernambuco in 1988, and has worked in biomedical image processing, visualization, and analysis since 1991. In 1993, he received an M.Sc. in Electrical Engineering from the University of Campinas. During 1994–1996, he worked with the Medical Image Processing Group at the Department of Radiology, University of Pennsylvania, USA, on interactive image segmentation for his doctorate. He received his doctorate in Electrical Engineering from the University of Campinas in 1996.
In 1997, he worked on a project for Globo TV at a research center, CPqD-TELEBRAS, in Campinas, developing methods for video quality assessment. His experience as a professor of Computer Science and Engineering started in 1998 at Unicamp. At IC-Unicamp, he was Associate Director (2006–2007) and Coordinator of the Post-Graduation Program (2009–2011). In 2011–2012, he spent a one-year sabbatical at the Robert W. Holley Center for Agriculture and Health (USDA, Cornell University), USA, working on image analysis applied to plant biology. He has held research productivity fellowships from CNPq since 1998. Among several awards, it is worth mentioning two Unicamp inventor awards in the category "License Technology" (2011 and 2012), three awards of academic excellence (2006, 2011, 2016) from IC-Unicamp, one award of academic recognition "Zeferino Vaz" from Unicamp (2014), and the best paper award of 2012 from the journal Pattern Recognition (received in Stockholm, Sweden, during the conference ICPR 2014). His research work aims at computational models to learn and interpret the semantic content of images in the domain of several applications. His research interests include image/video processing, visualization, and analysis; graph algorithms and dynamic programming; image annotation, organization, and retrieval; machine learning and pattern recognition; and image analysis applications in Biology, Medicine, Biometrics, Geology, and Agriculture.
Longin Jan Latecki
"Computing maximum weight cliques with local search on large graphs with application to common object discovery"

Abstract

Many practical problems in various application areas like bioinformatics, scheduling, planning, and computer vision can be formulated as combinatorial optimization problems. There are two classes of algorithms for finding satisfying solutions to such problems: complete (or exact) algorithms and incomplete algorithms. Complete algorithms guarantee the optimality of the solutions, but may fail to find a good solution within reasonable time for large instances. On the other hand, incomplete algorithms, often known as local search algorithms, cannot guarantee optimality, but can usually find optimal or near-optimal solutions within reasonable time, and thus are very useful in real-world applications. Local search algorithms start from some given solution and try to find a better solution in an appropriately defined neighborhood of the current solution. If a better solution is found, it replaces the current solution and the local search is continued from there. Local search is a widely accepted approach to solving hard combinatorial optimization problems. We focus on the maximum weight clique (MWC) problem, which is the problem of finding a clique with the greatest total weight in an undirected weighted graph. Due to the NP-hardness of the MWC problem, our goal is to find a good clique within a reasonable time limit in the local search framework. This problem arises in many real-world AI applications like common object discovery, image segmentation, and clustering aggregation. As our test application we consider the task of discovering the common objects in a set of images or in videos. Initially, object candidates are generated in each image and an undirected weighted graph is constructed over all the candidates. Each candidate serves as a node in the graph, while the weight of an edge describes the similarity between the corresponding pair of candidates.
The MWC corresponds to a set of object candidates sharing maximum mutual similarity, and each node in the MWC represents a discovered common object across the images.

Biography

Dr. Longin Jan Latecki is a professor of computer science at Temple University, Philadelphia. His main research interests include shape representation and similarity, object detection and recognition in images, robot perception, machine learning, and digital geometry. He received his PhD degree in 1992 and his habilitation in 1996, both from the University of Hamburg, Germany. He has published over 250 research papers and books (h-index = 46). He is an editorial board member of the journals Pattern Recognition and Computer Vision and Image Understanding. He received the 2010 College of Science and Technology Research Excellence Award at Temple University, and the annual Pattern Recognition Society Award, together with Azriel Rosenfeld, for the best article published in the journal Pattern Recognition in 1998. He is also the recipient of the 2000 Olympus Prize, the main annual award of the German Society for Pattern Recognition (DAGM).
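The local-search scheme for maximum weight clique described in this abstract can be illustrated with a small sketch. This is a hypothetical toy version, not the talk's algorithm: it uses the vertex-weighted variant of MWC for simplicity and a multi-start greedy improvement loop, where each move adds the heaviest vertex adjacent to every current clique member.

```python
def greedy_mwc(nodes, weight, adj):
    """Toy multi-start greedy for (vertex-weighted) maximum weight clique.

    nodes:  list of vertices
    weight: dict vertex -> weight
    adj:    dict vertex -> set of neighbors (undirected graph)
    Returns (best_clique, best_weight) over all greedy runs.
    """
    best, best_w = set(), 0
    for v0 in nodes:            # restart the local search from every vertex
        clique = {v0}
        while True:
            # candidate moves: vertices adjacent to every clique member
            cands = [v for v in nodes
                     if v not in clique and all(v in adj[u] for u in clique)]
            if not cands:
                break           # local optimum: no improving "add" move left
            clique.add(max(cands, key=lambda v: weight[v]))
        w = sum(weight[v] for v in clique)
        if w > best_w:
            best, best_w = set(clique), w
    return best, best_w
```

Real MWC solvers add drop and swap moves plus randomized restarts to escape local optima; the sketch keeps only the add move to show the neighborhood structure.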
János Pach
"The blessing of low dimensionality"

Abstract

We discuss some notoriously hard combinatorial problems for large classes of graphs and hypergraphs arising in geometric, algebraic, and practical applications. These structures escape the "curse of dimensionality": they can be embedded in a bounded-dimensional space, or they have small VC-dimension or a short algebraic description. What are the advantages of low dimensionality? (a) With the help of suitable topological and algebraic separator theorems, large families of geometric objects embedded in a fixed-dimensional space can be split into subfamilies of roughly the same size, and then the smaller families can be analyzed recursively. (b) Geometric objects in space can be compared by a number of naturally defined partial orders, so that one can utilize the theory of partially ordered sets. (c) Graphs and hypergraphs of bounded VC-dimension admit very small epsilon-nets and can be particularly well approximated by random sampling. (d) If the description complexity of a family of geometric objects is small, then the "combinatorial complexity" (number of faces of various dimensions) of their arrangements is also small. This typically guarantees the existence of efficient algorithms to visualize, describe, control, and manipulate these arrangements.

Biography

János Pach is Chair of Combinatorial Geometry at EPFL, Lausanne, and Research Adviser at the Rényi Institute, Budapest. His main fields of interest are discrete and computational geometry, convexity, and combinatorics. He has written more than 300 research papers. His books "Research Problems in Discrete Geometry" (with Brass and Moser) and "Combinatorial Geometry" (with Agarwal) have been translated into Japanese, Russian, and Chinese. He is co-editor-in-chief of Discrete & Computational Geometry and serves on the editorial boards of ten other professional journals. For his scientific achievements, he received several prizes, including the Lester R. Ford Award from the Mathematical Association of America (1990), and the Rényi Prize and the Academy Award from the Hungarian Academy of Sciences (1993, 1998). He was elected ACM Fellow (2011), member of Academia Europaea (2014), and AMS Fellow (2015). He was an invited speaker at the International Congress of Mathematicians in Seoul (2014).
Workshop on Discrete Topology and Mathematical Morphology, in honor of the retirement of Gilles Bertrand

Gilles Bertrand
"An axiomatic approach to combinatorial topology"

Abstract

We present an attempt to build an axiomatic approach to combinatorial topology. The purpose is to introduce a ground set of definitions which allow us to describe structures and to derive basic constructions relative to that field. A main ingredient of our project is the notion of completion. Completions are inductive properties which may be expressed in a declarative way and which may be combined. Intuitively, a completion may be seen as a rewriting rule acting on sets. In this talk, we first introduce two completions in order to define a remarkable collection of acyclic simplicial complexes, namely the collection of dendrites. We give a few basic properties of this collection. Also, we present a theorem which shows the equivalence between this collection and the one made of all simplicial complexes which are acyclic in the sense of homology. Then, we introduce several completions for describing dyads. A dyad is a pair of complexes which are, in a certain sense, linked by a "relative topology". We give some basic properties of dyads, and introduce a second set of axioms for relative dendrites. We establish a theorem which provides a link between dyads, relative dendrites, and dendrites. Finally, using again the framework of completions, we propose an extension of simple homotopy by considering homotopic pairs. Intuitively, a homotopic pair is a couple of objects (X,Y) such that X is included in Y and (X,Y) may be transformed into a trivial couple by simple homotopic deformations that keep X inside Y. Thus, these objects are linked by a "relative homotopy relation". Our main result is a theorem that makes clear the link between homotopic pairs and dyads. Thus, we prove that, in the unified framework of completions, it is possible to handle notions relative to both homotopy and homology.

Biography

Gilles Bertrand received his engineering degree from the Ecole Centrale des Arts et Manufactures in 1976.
Until 1983, he was with the Thomson-CSF company, where he designed image processing systems for aeronautical applications. He received his Ph.D. from the Ecole Centrale in 1986. Until 2018, he was teaching and doing research with the Computer Science and Telecommunication Department of ESIEE Paris and with the Laboratoire d'Informatique Gaspard-Monge of Université Paris-Est. His research interests are digital and combinatorial topology, mathematical morphology, and image analysis.
Tat Yung Kong

"Hereditarily homology-simple sets and homology critical kernels of binary images on sets of convex polytopes"

Abstract

We define a binary image to be a mapping I : X → {0, 1} in which X is a set of nonempty sets (e.g., a set of cubical voxels) in a Euclidean space and I^{−1}[1] is finite: We say I is a binary image on X and call each element of I^{−1}[1] a 1 of I. For any set S of 1s of I we use the term S-intersection to mean a nonempty set that is the intersection of a nonempty subset of S. Thus an S-intersection is either an element of S or a nonempty intersection of two or more elements of S. Let D be any set of 1s of a binary image I. If the inclusion ∪(I^{−1}[1] \ D) → ∪ I^{−1}[1] induces homology isomorphisms in all dimensions, then we say D is homology-simple in I. If every subset of D is homology-simple in I, then we say D is hereditarily homology-simple in I. A local characterization of hereditarily homology-simple sets can be useful for designing parallel thinning algorithms or for checking the topological soundness of proposed parallel thinning algorithms. When I is a binary image on the grid cells of a Cartesian grid of dimension ≤ 4, it can be deduced from results of Bertrand and Couprie that the sets D of 1s that are hereditarily homology-simple in I can be locally characterized as follows in terms of Bertrand's concept of critical kernel:
After discussing this characterization and some of its consequences, we will explain how we can generalize it to a local characterization of hereditarily homology-simple sets of 1s in any binary image I on an arbitrary set of convex polytopes of any dimension. To do this, we need only replace "I's critical kernel" in the above characterization with "I's homology critical kernel". We define the latter to be the set of all I^{−1}[1]-intersections c for which the intersection of c with the union of the 1s of I that do not contain c either is empty, or is disconnected, or has nontrivial homology in some positive dimension.

Biography

T. Yung Kong is Professor of Computer Science at Queens College of the City University of New York. Much of his published work has dealt with topological problems relating to binary images, especially problems of digital topology. He has also worked on topics in other research areas (such as, in recent years, the theory of fuzzy segmentation and algorithms for convex feasibility). A member of the Computer Science faculty at Queens College since 1989, he previously held faculty positions in computer science at Ohio University and in mathematics at City College of the City University of New York. In 2000–01 he was a Visiting Professor in the Computer and Information Sciences Department at Temple University. He has a B.A. in mathematics and an M.Math. from the University of Cambridge (earned in 1981 and 1982). In 1982–85 he was a doctoral student at the Oxford University Computing Laboratory, where he pursued research on digital topology. He received a D.Phil. from the University of Oxford in 1986.
Philippe Salembier
"Processing Radar Images with Hierarchical Region-Based Representations and Graph Signal Processing Tools"

Biography

Philippe Salembier received an engineering degree from the Ecole Polytechnique, Paris, France, in 1983 and an engineering degree from the Ecole Nationale Supérieure des Télécommunications, Paris, France, in 1985. From 1985 to 1989, he worked at Laboratoires d'Electronique Philips, Limeil-Brevannes, France, in the fields of digital communications and signal processing for HDTV. In 1989, he joined the Signal Processing Laboratory of the Swiss Federal Institute of Technology in Lausanne, Switzerland, to work on image processing, and received a PhD in 1991. At the beginning of 1992, after a stay at the Harvard Robotics Laboratory as a Postdoctoral Fellow, he joined the Technical University of Catalonia, Barcelona, Spain, where he is currently a professor, lecturing in the area of digital signal and image processing.
Workshop on Combinatorics, Geometry and Statistics

Pierre Alquier
"Theoretical Guarantees for Approximate Bayesian Inference"

Abstract

While Bayesian methods are extremely popular in statistics and machine learning, their application to massive datasets is often challenging, when possible at all. Indeed, the classical MCMC algorithms targeting the exact posterior are prohibitively slow when both the model dimension and the sample size are large, or when the likelihood is not tractable. Fast algorithms have been proposed, at the price of targeting approximations of the posterior. In this talk, I will discuss two such approaches.

Biography

Pierre Alquier obtained his PhD from Université Pierre et Marie Curie (2006). He worked as a lecturer at Université Paris Diderot (2007–2012), and then at University College Dublin (2012–2014). He is currently a professor of statistics at ENSAE ParisTech (since 2014). His research interests are high-dimensional statistics, machine learning, and PAC-Bayesian bounds. More recently, he has focused on the theoretical analysis of computational methods in Bayesian statistics, such as Monte Carlo methods and variational inference.
Dömötör Pálvölgyi
"News about the chromatic number of the plane"

Abstract

At least how many colors do we need to color all the points of the plane such that every pair of points at unit distance receive different colors? This question is known as the Hadwiger–Nelson problem, and until one year ago it was only known that 4 colors are needed and 7 are enough. Then, in April 2018, the gerontologist Aubrey de Grey showed in a breakthrough result that 4 colors are not enough. His proof was computer assisted, and a collaborative Polymath project was started to find a human-verifiable proof. I will give a short introduction to the classical results, then talk about the ongoing research, with plenty of open problems.

Biography

Dömötör Pálvölgyi obtained his PhD degree in mathematics from the Ecole Polytechnique Fédérale de Lausanne (EPFL) in 2010, after which he worked in various positions at the Mathematical Institute of the Eötvös Loránd University (ELTE Budapest). He was also a visiting professor at IIT Delhi (spring 2012) and a research fellow at the University of Cambridge (2015–2017). He won the Grünwald Géza Medal of the János Bolyai Mathematical Society (2011), the János Bolyai Scholarship of the Hungarian Academy of Sciences (2012), the Postdoctoral Fellowship of the Hungarian Scientific Research Fund (2012), the Marie Skłodowska-Curie Fellowship (IF-EF) of the European Commission (2015), and the Momentum (Lendület) grant of the Hungarian Academy of Sciences (2017).
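The classical lower bound of 4 for the chromatic number of the plane mentioned in the abstract above comes from small unit-distance graphs such as the 7-vertex Moser spindle. As an illustrative sketch (not material from the talk; de Grey's 2018 graph is far larger), one can verify the spindle's chromatic number by brute force:

```python
from itertools import product

# Moser spindle: a 7-vertex, 11-edge unit-distance graph, the classical
# witness that the plane cannot be properly colored with 3 colors.
# Vertices 0..6: two rhombi (each a pair of equilateral triangles) hinged
# at vertex 0, with the far tips 3 and 6 joined by a unit edge.
MOSER_SPINDLE = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
                 (0, 4), (0, 5), (4, 5), (4, 6), (5, 6), (3, 6)]

def chromatic_number(n_vertices, edges):
    """Smallest k admitting a proper k-coloring, by exhaustive search.
    Only feasible for very small graphs, which is all we need here."""
    for k in range(1, n_vertices + 1):
        for coloring in product(range(k), repeat=n_vertices):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k
    return n_vertices
```

In any 3-coloring, vertex 3 is forced to repeat vertex 0's color (it completes two triangles with 1 and 2), and so is vertex 6; the edge between 3 and 6 then yields a contradiction, which is exactly what the exhaustive search confirms.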
Csaba D. Tóth

"Online Unit Clustering and Covering in Euclidean Space"

Abstract

Given a set of n points in a metric space that arrive one by one, the Unit Clustering problem consists of partitioning the points into the minimum number of clusters (subsets) of diameter at most one, while the Unit Covering problem consists of covering all points by the minimum number of balls of unit radius. In this talk, we explore how the competitive ratio of online algorithms depends on the dimension in Euclidean spaces. Under the L_{∞} norm, we show that the competitive ratio of any online algorithm (deterministic or randomized) for Unit Clustering is Ω(d). We also give a randomized online algorithm with competitive ratio O(d^{2}) for Unit Clustering of integer points. The competitive ratio of any deterministic online algorithm for Unit Covering under L_{∞} is at least 2^{d}, and this ratio is the best possible. Under the L_{2} norm, we describe an online deterministic algorithm with competitive ratio O(1.321^{d}) for Unit Covering, improving on the earlier record by an exponential factor, and give a lower bound of d+1 for the competitive ratio of any algorithm for d ≥ 1. (Joint work with Adrian Dumitrescu and Anirban Ghosh.)

Biography

Csaba D. Tóth is a professor of mathematics at California State University Northridge, located in the city of Los Angeles, and a visiting professor of computer science at Tufts University in the Boston metro area. He is the author of more than 90 papers in discrete and computational geometry, and an editor of the 3rd edition of the Handbook of Discrete and Computational Geometry. His main research interests are in hierarchical subdivisions in low-dimensional spaces, topological graph theory, and geometric optimization.
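The Unit Clustering setup from the abstract above can be illustrated by the naive grid strategy under the L_{∞} norm. This is a hedged sketch of the standard baseline, not the talk's algorithm: assigning each arriving point to its unit grid cell keeps every cluster's L_{∞} diameter below 1, but each optimal cluster can meet up to 2^{d} cells, so this strategy is only 2^{d}-competitive, far from the O(d^{2}) bound for integer points described in the talk.

```python
import math

def online_unit_clusters(points):
    """Naive online Unit Clustering under L_inf via the unit grid.

    Points arrive one by one; each joins the cluster of its grid cell
    (the tuple of floors of its coordinates). Every cluster lies in a
    half-open unit cube, so its L_inf diameter is strictly below 1.
    Returns dict: cell index tuple -> list of points in that cluster.
    """
    clusters = {}
    for p in points:
        cell = tuple(math.floor(c) for c in p)  # irrevocable online decision
        clusters.setdefault(cell, []).append(p)
    return clusters
```

The weakness of the grid strategy is visible already in one dimension: points 0.9 and 1.1 fit in a single cluster of diameter 0.2, yet the grid irrevocably splits them into two cells, and stacking this effect per coordinate gives the 2^{d} factor.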