Keynote speakers

Pierre Alquier (workshop on Combinatorics, Geometry and Statistics)
ENSAE, France

Gilles Bertrand (workshop on Digital Topology and Mathematical Morphology)
ESIEE Paris, France

Alexandre Xavier Falcão (main DGCI event)
UNICAMP, Brazil

Tat Yung Kong (workshop on Digital Topology and Mathematical Morphology)
City University of New York, USA

Longin Jan Latecki (main DGCI event)
Temple University, USA

János Pach (main DGCI event)
EPFL, Switzerland

Dömötör Pálvölgyi (workshop on Combinatorics, Geometry and Statistics)
Eötvös Loránd University, Hungary

Philippe Salembier (workshop on Digital Topology and Mathematical Morphology)
Universitat Politècnica de Catalunya, Spain

Csaba D. Tóth (workshop on Combinatorics, Geometry and Statistics)
CSUN and Tufts University, USA

Main DGCI Event

Alexandre Xavier Falcão

University of Campinas, Brazil

"The role of optimum connectivity in image segmentation: Can the algotirhm learn object information during the process?"

Abstract

The use of machine learning methods for image segmentation has been actively investigated. Although these approaches can learn shape and texture models from previous examples of a given object, they can only provide its location and, with some user input, its approximate delineation in a new image. On the other hand, combining them with methods that exploit optimum connectivity for object delineation can be very effective.

After illustrating the above statement, we will focus on the role of optimum connectivity in image segmentation by presenting ways to incorporate object information in the connectivity function and the advantages of learning that information during segmentation. The framework of choice is the Image Foresting Transform (IFT), which has been successfully used to devise connectivity-based operators for image filtering, shape representation, data clustering, pattern classification, and image segmentation. In the IFT framework, an image is interpreted as a graph in which the nodes may be pixels, superpixels, or any other element from the image domain, each one represented by a point in some n-dimensional feature space. The arcs of the graph must satisfy some adjacency relation, which may be defined in the image domain and/or feature space. The IFT uses Dijkstra's algorithm, with more general connectivity functions and multiple sources, to transform the image graph into a spanning forest rooted at the minima of the resulting connectivity map, thereby reducing the image operator to a local processing of the forest's attributes (optimum paths, their costs, and root labels).
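
As a rough illustration of this machinery, the sketch below (Python; the graph, seeds, and arc_weight inputs are placeholders for whatever image graph is used) runs a Dijkstra-like propagation from multiple seeds with an additive path-cost function, returning the optimum-path forest through its cost, predecessor, and root-label maps. It is a minimal sketch, not the full IFT framework.

    import heapq

    def ift(graph, seeds, arc_weight):
        """Minimal Image Foresting Transform sketch.

        graph: dict mapping each node to its adjacent nodes.
        seeds: dict mapping seed nodes to their initial costs.
        arc_weight: function (u, v) -> nonnegative arc weight.
        Returns cost, predecessor, and root-label maps defining an
        optimum-path spanning forest (additive path cost assumed).
        """
        cost = {v: float("inf") for v in graph}
        pred = {v: None for v in graph}
        label = {v: None for v in graph}
        heap = []
        for s, c in seeds.items():
            cost[s] = c
            label[s] = s  # each tree is labeled by its root seed
            heapq.heappush(heap, (c, s))
        while heap:
            c, u = heapq.heappop(heap)
            if c > cost[u]:
                continue  # stale heap entry
            for v in graph[u]:
                # Additive connectivity: cost of extending the path to u
                # by the arc (u, v); other connectivity functions (e.g.,
                # max-arc) would replace this line.
                new_cost = cost[u] + arc_weight(u, v)
                if new_cost < cost[v]:
                    cost[v], pred[v], label[v] = new_cost, u, label[u]
                    heapq.heappush(heap, (new_cost, v))
        return cost, pred, label

In seed-based object delineation, reading off the root labels of this forest gives the competition between seeds discussed below.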

The lecture will cover the recent advances in the design of connectivity functions for superpixel and object delineation, the role of optimum connectivity in data clustering and semi-supervised classification, and their use to extract global object information from markers and superpixels. In this part of the lecture, we will focus on IFT-based object delineation from competing seeds, in which one optimum-path tree grows from each seed and the object is defined by the forest rooted at its internal seeds. We will then show the advantages of extracting local object information from the spanning trees during their growing process for dynamic arc-weight estimation. Finally, we will discuss open problems related to connectivity-based object delineation.

Biography

Alexandre Xavier Falcao is a full professor at the Institute of Computing (IC), University of Campinas (Unicamp). He currently holds a research productivity fellowship (level 1B) from the Brazilian National Council for Scientific and Technological Development (CNPq) and serves as Senior Area Editor of the IEEE Signal Processing Letters, coordinator of the Computer Science area at FAPESP, member of the editorial board of the journal Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, and member of the program committee of several conferences (e.g., SPIE Medical Imaging and SIBGRAPI).

He received a B.Sc. in Electrical Engineering from the Federal University of Pernambuco in 1988 and has worked in biomedical image processing, visualization, and analysis since 1991. In 1993, he received an M.Sc. in Electrical Engineering from the University of Campinas. During 1994-1996, he worked with the Medical Image Processing Group at the Department of Radiology, University of Pennsylvania, USA, on interactive image segmentation for his doctorate, and he received his doctorate in Electrical Engineering from the University of Campinas in 1996. In 1997, he worked on a project for Globo TV at the CPqD-TELEBRAS research center in Campinas, developing methods for video quality assessment.

His experience as a professor of Computer Science and Engineering started in 1998 at Unicamp. At IC-Unicamp, he was Associate Director (2006-2007) and Coordinator of the Post-Graduation Program (2009-2011). In 2011-2012, he spent a one-year sabbatical at the Robert W. Holley Center for Agriculture and Health (USDA, Cornell University), USA, working on image analysis applied to plant biology.

He has held research productivity fellowships from CNPq since 1998. Among several awards, it is worth mentioning two Unicamp inventor awards in the category "License Technology" (2011 and 2012), three awards of academic excellence (2006, 2011, 2016) from IC-Unicamp, the "Zeferino Vaz" award of academic recognition from Unicamp (2014), and the best paper award of 2012 from the journal Pattern Recognition (received in Stockholm, Sweden, during ICPR 2014).

His research work aims at developing computational models that learn and interpret the semantic content of images across several application domains. His research interests include image/video processing, visualization, and analysis; graph algorithms and dynamic programming; image annotation, organization, and retrieval; machine learning and pattern recognition; and image analysis applications in Biology, Medicine, Biometrics, Geology, and Agriculture.

 

Longin Jan Latecki
Temple University, USA

"Computing maximum weight cliques with local search on large graphs with application to common object discovery" 

Abstract

Many practical problems in various application areas, such as bioinformatics, scheduling, planning, and computer vision, can be formulated as combinatorial optimization problems. There are two classes of algorithms for finding solutions to such problems: complete (or exact) algorithms and incomplete algorithms. Complete algorithms guarantee the optimality of the solutions but may fail to find a good solution within reasonable time for large instances. On the other hand, incomplete algorithms, often known as local search algorithms, cannot guarantee optimality but can usually find optimal or near-optimal solutions within reasonable time, and thus are very useful in real-world applications. Local search algorithms start from some given solution and try to find a better solution in an appropriately defined neighborhood of the current solution. If a better solution is found, it replaces the current solution, and the local search is continued from there. Local search is a widely accepted approach to solving hard combinatorial optimization problems.
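
The loop just described fits in a few lines; the following sketch (Python, with hypothetical neighbors and score functions supplied by the caller) is only the generic hill-climbing skeleton, not the talk's algorithm.

    def local_search(initial, neighbors, score, max_iters=10_000):
        """Generic hill-climbing local search.

        initial: starting solution.
        neighbors: function mapping a solution to candidate solutions.
        score: objective to maximize.
        Stops at a local optimum or after max_iters iterations.
        """
        current = initial
        for _ in range(max_iters):
            best = max(neighbors(current), key=score, default=None)
            if best is None or score(best) <= score(current):
                return current  # local optimum reached
            current = best  # move to the better neighbor
        return current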

We focus on the maximum weight clique (MWC) problem, which is the problem of finding a clique with the greatest total weight in an undirected weighted graph. Due to the NP-hardness of the MWC problem, our goal is to find a good clique within a reasonable time limit in the local search framework. This problem arises in many real-world AI applications, such as common object discovery, image segmentation, and clustering aggregation.

As our test application, we consider the task of discovering the common objects in a set of images or in videos. Initially, object candidates are generated in each image, and an undirected weighted graph is constructed over all the candidates. Each candidate serves as a node in the graph, while the weight of an edge describes the similarity between the corresponding pair of candidates. The MWC corresponds to a set of object candidates sharing maximum mutual similarity, and each node in the MWC represents a discovered common object across the images.
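
A minimal sketch of this construction, assuming some precomputed similarity function between candidates (both candidates and similarity are placeholder names), builds the weighted graph and then grows a heavy clique greedily; the actual local-search algorithm of the talk is considerably more refined.

    import itertools

    def build_graph(candidates, similarity, threshold=0.5):
        """Weighted graph over object candidates: edge weights are
        pairwise similarities; weak edges are dropped."""
        weight = {}
        for i, j in itertools.combinations(range(len(candidates)), 2):
            w = similarity(candidates[i], candidates[j])
            if w >= threshold:
                weight[(i, j)] = w
        return weight

    def greedy_heavy_clique(n, weight):
        """Greedily grow a heavy clique, seeded by the heaviest edge."""
        def w(i, j):
            return weight.get((min(i, j), max(i, j)))

        if not weight:
            return []
        (i0, j0) = max(weight, key=weight.get)
        clique, candidates = [i0, j0], set(range(n)) - {i0, j0}
        while True:
            best, best_gain = None, 0.0
            for v in candidates:
                gains = [w(v, u) for u in clique]
                if any(g is None for g in gains):
                    continue  # v is not adjacent to every clique member
                if sum(gains) > best_gain:
                    best, best_gain = v, sum(gains)
            if best is None:
                return clique  # maximal: no vertex can extend the clique
            clique.append(best)
            candidates.discard(best)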

Biography

Dr. Longin Jan Latecki is a professor of computer science at Temple University, Philadelphia. His main research interests include shape representation and similarity, object detection and recognition in images, robot perception, machine learning, and digital geometry. He received his PhD degree in 1992 and his habilitation in 1996, both from the University of Hamburg, Germany. He has published over 250 research papers and books (h-index = 46). He is an editorial board member of the journals Pattern Recognition and Computer Vision and Image Understanding. He received the 2010 College of Science and Technology Research Excellence Award at Temple University and, together with Azriel Rosenfeld, the annual Pattern Recognition Society Award for the best article published in the journal Pattern Recognition in 1998. He is also the recipient of the 2000 Olympus Prize, the main annual award of the German Society for Pattern Recognition (DAGM).

 

János Pach
EPFL, Switzerland

"The blessing of low dimensionality"

Abstract

We discuss some notoriously hard combinatorial problems for large classes of graphs and hypergraphs arising in geometric, algebraic, and practical applications. These structures escape the “curse of dimensionality”: they can be embedded in a bounded-dimensional space, or they have small VC-dimension or a short algebraic description. What are the advantages of low dimensionality?

(a) With the help of suitable topological and algebraic separator theorems, large families of geometric objects embedded in a fixed-dimensional space can be split into subfamilies of roughly the same size, and then the smaller families can be analyzed recursively.

(b) Geometric objects in space can be compared by a number of naturally defined partial orders, so that one can utilize the theory of partially ordered sets.

(c) Graphs and hypergraphs of bounded VC-dimension admit very small epsilon-nets and can be particularly well approximated by random sampling.

(d) If the description complexity of a family of geometric objects is small, then the “combinatorial complexity” (number of faces of various dimensions) of their arrangements is also small. This typically guarantees the existence of efficient algorithms to visualize, describe, control, and manipulate these arrangements.
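
For item (c), the quantitative statement behind “very small epsilon-nets” is the classical Haussler-Welzl bound, which can be stated as follows:

    \textbf{Theorem (Haussler--Welzl).} Let $(X,\mathcal{R})$ be a range space of
    VC-dimension $d$, let $A \subseteq X$ be finite, and let $0<\varepsilon<1$.
    Then there exists an $\varepsilon$-net $N \subseteq A$ of size
    \[
      |N| \;=\; O\!\left(\frac{d}{\varepsilon}\,\log\frac{1}{\varepsilon}\right),
    \]
    that is, $N$ meets every range $R \in \mathcal{R}$ with
    $|R \cap A| \ge \varepsilon\,|A|$.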

Biography

János Pach is Chair of Combinatorial Geometry at EPFL, Lausanne, and Research Adviser at the Rényi Institute, Budapest. His main fields of interest are discrete and computational geometry, convexity, and combinatorics. He has written more than 300 research papers. His books "Research Problems in Discrete Geometry" (with Brass and Moser) and "Combinatorial Geometry" (with Agarwal) have been translated into Japanese, Russian, and Chinese. He is co-editor-in-chief of Discrete & Computational Geometry and serves on the editorial boards of ten other professional journals. For his scientific achievements, he received several prizes, including the Lester R. Ford Award of the Mathematical Association of America (1990) and the Rényi Prize and the Academy Award of the Hungarian Academy of Sciences (1993, 1998). He was elected ACM Fellow (2011), member of Academia Europaea (2014), and AMS Fellow (2015). He was an invited speaker at the International Congress of Mathematicians in Seoul (2014).

 

Workshop on Digital Topology and Mathematical Morphology, in honor of the retirement of Gilles Bertrand

Gilles Bertrand
ESIEE Paris, France

"An axiomatic approach to combinatorial topology"

Abstract

We present an attempt to build an axiomatic approach to combinatorial topology. The purpose is to introduce a ground set of definitions which allow us to describe structures and to derive basic constructions relative to that field. A main ingredient of our project is the notion of completion. Completions are inductive properties which may be expressed in a declarative way and which may be combined. Intuitively, a completion may be seen as a rewriting rule acting on sets.

In this talk, we first introduce two completions in order to define a remarkable collection of acyclic simplicial complexes, namely the collection of dendrites. We give a few basic properties of this collection. Also, we present a theorem which shows the equivalence between this collection and the one made of all simplicial complexes that are acyclic in the sense of homology.

Then, we introduce several completions for describing dyads. A dyad is a pair of complexes which are, in a certain sense, linked by a "relative topology". We give some basic properties of dyads, and introduce a second set of axioms for relative dendrites. We establish a theorem which provides a link between dyads, relative dendrites, and dendrites.

Finally, using again the framework of completions, we propose an extension of simple homotopy by considering homotopic pairs. Intuitively, a homotopic pair is a couple of objects (X,Y) such that X is included in Y and (X,Y) may be transformed into a trivial couple by simple homotopic deformations that keep X inside Y. Thus, these objects are linked by a "relative homotopy relation". Our main result is a theorem that makes clear the link between homotopic pairs and dyads. Hence, we prove that, in the unified framework of completions, it is possible to handle notions relative to both homotopy and homology.

Biography

Gilles Bertrand received his engineering degree from the Ecole Centrale des Arts et Manufactures in 1976. Until 1983, he was with the Thomson-CSF company, where he designed image processing systems for aeronautical applications. He received his Ph.D. from the Ecole Centrale in 1986. Until 2018, he taught and did research with the Computer Science and Telecommunication Department of ESIEE Paris and with the Laboratoire d’Informatique Gaspard-Monge of Université Paris-Est. His research interests are digital and combinatorial topology, mathematical morphology, and image analysis.

 

Tat Yung Kong
City University of New York, USA

"Hereditarily homology-simple sets and homology critical kernels of binary images on sets of convex polytopes"

Abstract

We define a binary image to be a mapping I : X → {0, 1} in which X is a set of nonempty sets (e.g., a set of cubical voxels) in a Euclidean space and I⁻¹[1] is finite: We say I is a binary image on X and call each element of I⁻¹[1] a 1 of I. For any set S of 1s of I we use the term S-intersection to mean a nonempty set that is the intersection of a nonempty subset of S. Thus an S-intersection is either an element of S or a nonempty intersection of two or more elements of S.
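
Since S-intersections are purely combinatorial objects, they can be enumerated directly; a minimal sketch follows (Python, representing each 1 of I as a frozenset of points; it is exponential in |S|, so only suitable for small examples).

    from itertools import combinations

    def s_intersections(S):
        """Enumerate all S-intersections of a set S of 1s.

        S: iterable of frozensets (each a nonempty set of points).
        Yields every nonempty set that is the intersection of a
        nonempty subset of S, without duplicates. Exponential in
        len(S): all 2^len(S) - 1 subsets are examined.
        """
        S = list(S)
        seen = set()
        for k in range(1, len(S) + 1):
            for subset in combinations(S, k):
                inter = frozenset.intersection(*subset)
                if inter and inter not in seen:
                    seen.add(inter)
                    yield inter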

Let D be any set of 1s of a binary image I. If the inclusion ∪(I⁻¹[1] \ D) ⊆ ∪I⁻¹[1] induces homology isomorphisms in all dimensions, then we say D is homology-simple in I. If every subset of D is homology-simple in I, then we say D is hereditarily homology-simple in I.

A local characterization of hereditarily homology-simple sets can be useful for designing parallel thinning algorithms or for checking the topological soundness of proposed parallel thinning algorithms. When I is a binary image on the grid cells of a Cartesian grid of dimension ≤ 4, it can be deduced from results of Bertrand and Couprie that the sets D of 1s that are hereditarily homology-simple in I can be locally characterized as follows in terms of Bertrand's concept of critical kernel:

  • A set D ⊆ I⁻¹[1] is hereditarily homology-simple in I if and only if every D-intersection in I’s critical kernel is a subset of a 1 of I that is not in D.

After discussing this characterization and some of its consequences, we will explain how we can generalize it to a local characterization of hereditarily homology-simple sets of 1s in any binary image I on an arbitrary set of convex polytopes of any dimension. To do this, we need only replace "I's critical kernel" in the above characterization with "I's homology critical kernel". We define the latter to be the set of all I⁻¹[1]-intersections c for which the intersection of c with the union of the 1s of I that do not contain c either is empty or is disconnected or has non-trivial homology in some positive dimension.

Biography

T. Yung Kong is Professor of Computer Science at Queens College of the City University of New York. Much of his published work has dealt with topological problems relating to binary images, especially problems of digital topology. He has also worked on topics in other research areas, such as, in recent years, the theory of fuzzy segmentation and algorithms for convex feasibility. A member of the Computer Science faculty at Queens College since 1989, he previously held faculty positions in computer science at Ohio University and in mathematics at City College of the City University of New York. In 2000-01 he was a Visiting Professor in the Computer and Information Sciences Department at Temple University. He has a B.A. in mathematics and an M.Math. from the University of Cambridge (earned in 1981 and 1982). In 1982-85 he was a doctoral student at the Oxford University Computing Laboratory, where he pursued research on digital topology, and he received a D.Phil. from the University of Oxford in 1986.

 

Philippe Salembier
Universitat Politècnica de Catalunya, Spain

"Processing Radar Images with Hierarchical Region-Based Representations and Graph Signal Processing Tools"

Abstract

This talk will discuss the usefulness of hierarchical region-based representations of images, such as maxtrees, mintrees, and Binary Partition Trees, for radar images. These representations can be considered as an initial abstraction from the signal in which raw pixels are grouped to form regions that are hierarchically structured by inclusion in a tree. They provide multiple resolutions of description and easy access to subsets of regions. This approach and the associated notions will be discussed both for maxtree descriptions of Synthetic Aperture Radar (SAR) images and for Binary Partition Trees for polarimetric SAR images.

Once constructed, these hierarchical representations can be used for many applications, including filtering, segmentation, classification, and object detection. Many processing strategies consist of populating the tree with features of interest for the application and then applying a specific graph cut called pruning. These pruning ideas will be illustrated in particular for polarimetric SAR image segmentation and speckle reduction.
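
As a rough illustration of pruning, the sketch below (Python; the parent-array encoding, node ordering, and keep predicate are assumptions, not the talk's exact formulation) merges every node whose attribute fails a criterion into its nearest surviving ancestor.

    def prune(parent, attribute, keep):
        """Attribute pruning of a tree encoded as a parent array.

        parent: parent[i] is the parent of node i; the root r has
            parent[r] == r. Nodes are assumed ordered so that
            parent[i] <= i (roots first), as in common max-tree
            implementations.
        attribute: one attribute value per node.
        keep: predicate on attribute values.
        Returns a new parent assignment in which removed nodes are
        bypassed, i.e., each node hangs from its nearest kept ancestor.
        """
        n = len(parent)
        nearest = [0] * n  # nearest kept ancestor-or-self of each node
        for i in range(n):  # parent[i] <= i, so ancestors come first
            p = parent[i]
            if keep(attribute[i]) or p == i:
                nearest[i] = i  # kept nodes (and the root) stand
            else:
                nearest[i] = nearest[p]
        # Reattach every node to the nearest kept ancestor of its parent.
        return [nearest[parent[i]] for i in range(n)]

In a max-tree, keep might test a region's area or contrast; pruning with such a criterion yields a connected attribute filter.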

The tree representation itself is a specific graph structure. As a result, an alternative processing strategy consists of populating the tree with attributes and then treating the resulting data as graph attribute signals, which can be processed with graph filters. The goal of this filtering step is to exploit the correlation between attribute values on neighboring tree nodes. Considering that trees are specific graphs in which the connectivity towards ancestors and descendants may have different meanings, several filtering strategies can be defined. Besides classical graph filters, two new filtering notions can be used: Tree and Branch filters. These ideas will be illustrated in the context of ship detection in SAR images.
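
A minimal example of such a graph filter on a tree is sketched below (Python; a simple one-hop average over each node, its parent, and its children, which is a generic baseline rather than the talk's Tree or Branch filters).

    def one_hop_filter(parent, signal):
        """One-hop averaging filter on a tree attribute signal.

        parent: parent[i] is the parent of node i (root r: parent[r] == r).
        signal: one attribute value per node.
        Each output value averages the node with its parent and its
        children, exploiting the correlation between attribute values
        on neighboring tree nodes.
        """
        n = len(parent)
        total = list(signal)
        count = [1] * n
        for i in range(n):
            p = parent[i]
            if p != i:  # accumulate child into parent and vice versa
                total[p] += signal[i]
                count[p] += 1
                total[i] += signal[p]
                count[i] += 1
        return [t / c for t, c in zip(total, count)]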

Biography

Philippe Salembier received an engineering degree from the Ecole Polytechnique, Paris, France, in 1983 and an engineering degree from the Ecole Nationale Supérieure des Télécommunications, Paris, France, in 1985. From 1985 to 1989, he worked at the Laboratoires d'Electronique Philips, Limeil-Brevannes, France, in the fields of digital communications and signal processing for HDTV. In 1989, he joined the Signal Processing Laboratory of the Swiss Federal Institute of Technology in Lausanne, Switzerland, to work on image processing, and received a PhD in 1991. At the beginning of 1992, after a stay at the Harvard Robotics Laboratory as a Postdoctoral Fellow, he joined the Technical University of Catalonia, Barcelona, Spain, where he is currently a professor lecturing on digital signal and image processing.

His research interests include image and video sequence processing, compression and indexing, mathematical morphology, level sets and nonlinear filtering, as well as remote sensing image processing and signal processing tools for genomics. In terms of standardization activities, he has been involved in the definition of the MPEG-7 standard ("Multimedia Content Description Interface") as chair of the "Multimedia Description Scheme" group between 1999 and 2001.

He served as an Associate Editor of various journals, including the Journal of Visual Communication and Image Representation (Academic Press), Signal Processing (Elsevier), Signal Processing: Image Communication (Elsevier), the EURASIP Journal on Image and Video Processing, the IEEE Transactions on Image Processing, the IEEE Transactions on Circuits and Systems for Video Technology, and the IEEE Signal Processing Letters. He was a member of the Image and Multidimensional Signal Processing Technical Committee of the IEEE Signal Processing Society (2000-2006), an AdCom officer of the European Association for Signal Processing (EURASIP) (1994-1999), and technical chair (with Prof. Ed Delp) of the IEEE International Conference on Image Processing, ICIP 2003, organized in Barcelona. Philippe Salembier is a Fellow of the IEEE.

 

Workshop on Combinatorics, Geometry and Statistics

Pierre Alquier
ENSAE, France

"Theoretical Guarantees for Approximate Bayesian Inference"

Abstract

While Bayesian methods are extremely popular in statistics and machine learning, their application to massive datasets is often challenging, when it is possible at all. Indeed, the classical MCMC algorithms targeting the exact posterior are prohibitively slow when both the model dimension and the sample size are large, or when the likelihood is not tractable. Fast algorithms have been proposed, at the price of targeting approximations of the posterior. In this talk, I will discuss two approaches.

The first approach is an approximate Metropolis-Hastings (MH) algorithm. In MH, a simulation from the transition kernel P requires a computation of the likelihood that might be expensive. We propose an approximation Q of P leading to a fast algorithm, and we explicitly control the total variation (TV) distance between the posterior and the distribution of the simulations. This algorithm was proposed in Alquier, Friel, Everitt and Boland (Statistics and Computing, 2016) under the name "Noisy-MCMC". I will also mention recent results by Rudolf and Schweizer (2018), who relaxed the assumptions of our results by using the Wasserstein distance instead of TV.
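
As a generic illustration (not the specific Noisy-MCMC kernel of the paper), the sketch below replaces the exact log-likelihood in a Metropolis-Hastings step with a cheap approximation, e.g., one based on a data subsample; names like approx_loglik are placeholders.

    import math
    import random

    def approx_mh(theta0, approx_loglik, log_prior, proposal, n_steps):
        """Metropolis-Hastings with an approximated likelihood.

        theta0: initial parameter value.
        approx_loglik: cheap stand-in for the exact log-likelihood
            (e.g., computed on a data subsample); this is what turns
            the exact kernel P into an approximate kernel Q.
        log_prior: log of the prior density.
        proposal: function theta -> theta' (assumed symmetric).
        Returns the list of sampled parameter values.
        """
        samples = [theta0]
        log_post = approx_loglik(theta0) + log_prior(theta0)
        for _ in range(n_steps):
            theta = samples[-1]
            cand = proposal(theta)
            cand_log_post = approx_loglik(cand) + log_prior(cand)
            # Usual MH acceptance ratio, but computed under Q, not P.
            if math.log(random.random()) < cand_log_post - log_post:
                samples.append(cand)
                log_post = cand_log_post
            else:
                samples.append(theta)
        return samples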

The second approach is Variational Bayesian inference (VB). VB aims at approximating the posterior by a distribution in a tractable family; MCMC algorithms are thus replaced by an optimization algorithm, which is orders of magnitude faster. VB methods have been applied in computationally demanding applications such as collaborative filtering, natural language processing, and text processing. However, despite nice results in practice, the theoretical properties of these approximations are usually not known. I will present conditions ensuring the asymptotic concentration of the variational approximation of the posterior around the true parameter. These results are taken from our recent works: Alquier and Ridgway (2017) and Chérief-Abdellatif and Alquier (2018).
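
A minimal sketch of the VB idea, assuming a one-dimensional parameter and a Gaussian variational family (the log_post_grad function is a placeholder for the model at hand and must accept numpy arrays), maximizes a Monte Carlo estimate of the ELBO by stochastic gradient ascent with the reparameterization theta = m + s * eps.

    import numpy as np

    def gaussian_vb(log_post_grad, m=0.0, log_s=0.0,
                    lr=0.01, n_iters=2000, n_mc=32, seed=0):
        """Fit q = N(m, s^2) to a posterior by maximizing the ELBO.

        log_post_grad: gradient of the (unnormalized) log-posterior,
            vectorized over a numpy array of parameter values.
        Uses the reparameterization theta = m + s * eps with
        eps ~ N(0, 1); the entropy of q contributes +1 to the
        gradient in log_s. Returns the fitted (m, s).
        """
        rng = np.random.default_rng(seed)
        for _ in range(n_iters):
            s = np.exp(log_s)
            eps = rng.standard_normal(n_mc)
            g = log_post_grad(m + s * eps)  # gradients at the samples
            m += lr * g.mean()
            log_s += lr * ((g * s * eps).mean() + 1.0)
        return m, np.exp(log_s)

For example, with log_post_grad = lambda t: -t (a standard normal posterior), the iterates drive (m, s) toward (0, 1).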

Biography

Pierre Alquier obtained his PhD from Université Pierre et Marie Curie (2006). He worked as a lecturer at Université Paris Diderot (2007-2012) and then at University College Dublin (2012-2014). He has been a professor of statistics at ENSAE ParisTech since 2014. His research interests are high-dimensional statistics, machine learning, and PAC-Bayesian bounds. More recently, he has focused on the theoretical analysis of computational methods in Bayesian statistics, such as Monte Carlo methods and variational inference.

 

Dömötör Pálvölgyi
Eötvös Loránd University, Hungary

"News about the chromatic number of the plane"

Abstract

At least how many colors do we need to color all the points of the plane so that every pair of points at unit distance receives different colors? This question is known as the Hadwiger-Nelson problem, and until one year ago it was only known that 4 colors are needed and 7 are enough. Then, in April 2018, the gerontologist Aubrey de Grey showed in a breakthrough result that 4 colors are not enough. His proof was computer assisted, and a collaborative Polymath project was started to find a human-verifiable proof. I will give a short introduction to the classical results and then talk about the ongoing research, with plenty of open problems.
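
The classical lower bound of 4 mentioned above is witnessed by the Moser spindle, a 7-vertex unit-distance graph; the brute-force check below (Python) verifies that it admits no proper 3-coloring.

    from itertools import product

    # Moser spindle: two unit rhombi sharing vertex 0 (each rhombus is a
    # pair of equilateral unit triangles), with the far tips 3 and 6
    # also at unit distance. All 11 edges join points at unit distance.
    EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
             (0, 4), (0, 5), (4, 5), (4, 6), (5, 6), (3, 6)]

    def is_proper(coloring):
        return all(coloring[u] != coloring[v] for u, v in EDGES)

    # Try every assignment of 3 colors to the 7 vertices (3^7 cases).
    three_colorable = any(is_proper(c) for c in product(range(3), repeat=7))
    print("3-colorable:", three_colorable)  # False: 4 colors are needed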

Biography

Dömötör Pálvölgyi obtained his PhD degree in mathematics from the Ecole Polytechnique Fédérale de Lausanne (EPFL) in 2010, after which he worked in various positions at the Mathematical Institute of Eötvös Loránd University (ELTE Budapest). He was also a visiting professor at IIT Delhi (spring 2012) and a research fellow at the University of Cambridge (2015-2017). He won the Grünwald Géza medal of the János Bolyai Mathematical Society (2011), the János Bolyai Scholarship of the Hungarian Academy of Sciences (2012), the Postdoctoral Fellowship of the Hungarian Scientific Research Fund (2012), the Marie Sklodowska-Curie Fellowship (IF-EF) of the European Commission (2015), and the Momentum (Lendület) grant of the Hungarian Academy of Sciences (2017).

 

Csaba D. Tóth
CSUN and Tufts University, USA

 "Online Unit Clustering and Covering in Euclidean Space"

Abstract

Given a set of n points in a metric space that arrive one by one, the Unit Clustering problem consists of partitioning the points into the minimum number of clusters (subsets) of diameter at most one, while the Unit Covering problem consists of covering all points by the minimum number of balls of unit radius. In this talk, we explore how the competitive ratio of online algorithms depends on the dimension in Euclidean spaces. Under the L∞ norm, we show that the competitive ratio of any online algorithm (deterministic or randomized) for Unit Clustering is Ω(d). We also give a randomized online algorithm with competitive ratio O(d²) for Unit Clustering of integer points. The competitive ratio of any deterministic online algorithm for Unit Covering under L∞ is at least 2ᵈ, and this ratio is the best possible. Under the L₂ norm, we describe an online deterministic algorithm with competitive ratio O(1.321ᵈ) for Unit Covering, improving on the earlier record by an exponential factor, and give a lower bound of d+1 for the competitive ratio of any algorithm for d ≥ 1. (Joint work with Adrian Dumitrescu and Anirban Ghosh.)
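
For intuition, the folklore grid strategy for online Unit Clustering under the L∞ norm (a simple baseline, not one of the algorithms of the talk) can be stated in a few lines of Python:

    import math

    def grid_cluster(stream):
        """Folklore grid strategy for online unit clustering (L-infinity).

        stream: iterable of d-dimensional points (tuples of floats).
        Each point is assigned to the axis-aligned unit grid cell
        containing it; every nonempty cell is one cluster, so each
        cluster has L-infinity diameter at most 1. Returns a dict
        mapping cell -> list of points.
        """
        clusters = {}
        for p in stream:
            cell = tuple(math.floor(x) for x in p)  # containing grid cell
            clusters.setdefault(cell, []).append(p)
        return clusters

Since any optimal cluster of diameter at most one can meet at most 2ᵈ grid cells, this baseline opens at most 2ᵈ times as many clusters as the optimum.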

Biography

Csaba D. Tóth is a professor of mathematics at California State University Northridge, located in the city of Los Angeles, and a visiting professor of computer science at Tufts University in the Boston metro area. He is the author of more than 90 papers in discrete and computational geometry, and an editor of the 3rd edition of the Handbook of Discrete and Computational Geometry. His main research interests are in hierarchical subdivisions in low-dimensional spaces, topological graph theory, and geometric optimization.
