2024/07/18: 9h30
Speaker : Pierre-Henri Wuillemin (LIP6, Sorbonne Université)
Title : Probabilistic Graphical Models and Causal Models for AI (Modèles probabilistes graphiques, modèles causaux)
Abstract: Probabilistic modelling is often considered complex, requiring an exponential number of parameters even in the discrete case. In this talk, we describe and motivate probabilistic graphical models. After discussing some of their important properties, we show how these same graphical models can support a quantitative causal analysis, bringing back into the realm of mathematics and statistics a notion that has long been considered barely formalisable, to say the least. (This presentation is mainly based on the work of Judea Pearl, Turing Award 2011.)
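As a standard illustration of the parameter savings behind graphical models (my addition, not from the abstract), compare the full joint distribution of n binary variables with its Bayesian-network factorisation:

```latex
% Full joint over n binary variables: 2^n - 1 free parameters.
% A Bayesian network factorises the joint along a DAG:
P(X_1,\dots,X_n) \;=\; \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Pa}(X_i)\bigr),
% which needs only \sum_i 2^{|\mathrm{Pa}(X_i)|} parameters; with at most
% k parents per variable this is O(n \, 2^k) instead of 2^n.
```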
Coffee break : 10:30 — 11:00
11h00
Speaker : Christophe Gonzales (COALA, LIS)
Title : A brief overview of decision under uncertainty
Abstract: Decision under uncertainty is pervasive in artificial intelligence. The goal of this tutorial is to review some popular decision models and highlight their connections as well as the situations in which they can be applied. We will start with the expected utility model (EU) and show the properties it relies on. Then, we will present decision trees, which represent sequential decision problems, and show that the aforementioned properties allow for an efficient algorithm to solve them. Interpreting these trees differently will lead to another, more efficient model called an influence diagram. The EU model is known to have severe limitations and, in some situations, more general decision models are needed. Based on a rephrasing of EU, we will present the more general rank-dependent utility model (RDU). We will also show the issues RDU raises w.r.t. sequential decision making. Another path to generalising EU is to replace probabilities with other models, notably belief functions. We will show that the EU properties presented at the beginning of the talk can also be applied to belief functions, resulting in the belief expected utility (BEU) model. We will conclude the talk by briefly mentioning other popular decision models.
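A minimal sketch (my illustration, using the standard textbook definitions rather than anything from the talk) of the two models on a discrete lottery; the lottery, utility function u and weighting function w below are made-up examples:

```python
# Expected utility (EU) and rank-dependent utility (RDU) of a discrete lottery.

def expected_utility(lottery, u):
    """lottery: list of (outcome, probability); u: utility function."""
    return sum(p * u(x) for x, p in lottery)

def rank_dependent_utility(lottery, u, w):
    """RDU: outcomes sorted from worst to best; the weight of an outcome is
    w(P(this outcome or better)) - w(P(strictly better))."""
    ordered = sorted(lottery, key=lambda xp: u(xp[0]))  # worst to best
    rdu, tail = 0.0, 1.0  # tail = probability of this outcome or better
    for x, p in ordered:
        rdu += (w(tail) - w(tail - p)) * u(x)
        tail -= p
    return rdu

if __name__ == "__main__":
    lottery = [(0, 0.5), (100, 0.5)]
    u = lambda x: x
    w = lambda q: q ** 2  # pessimistic probability weighting
    print(expected_utility(lottery, u))           # 50.0
    print(rank_dependent_utility(lottery, u, w))  # 0.75*u(0) + 0.25*u(100) = 25.0
```

With the identity weighting w(q) = q, RDU collapses back to EU, which is the "rephrasing of EU" viewpoint mentioned above.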
2024/03/28: 9h30
Speaker : Pablo Arrighi (LMF, Université Paris-Saclay)
Title : Past, future, what’s the difference?
Abstract: The laws of Physics are time-reversible, making no qualitative distinction between the past and the future—yet we can only go towards the future. This apparent contradiction is known as the ‘arrow of time problem’. Its current resolution states that the future is the direction of increasing entropy. But entropy can only increase towards the future if it was low in the past, and past low entropy is a very strong assumption to make, because low entropy states are rather improbable, non-generic. Recent works from the Physics literature suggest, however, that we may do away with this so-called ‘past hypothesis’, in the presence of reversible dynamical laws featuring expansion. We prove that this is the case for a synchronous graph-rewriting-based toy model. It consists of graphs upon which particles circulate and interact according to local reversible rules. Some rules locally shrink or expand the graph. Generic states always expand; entropy always increases—thereby providing a local explanation for the arrow of time. This discrete setting allows us to deploy the full rigour of theoretical Computer Science proof techniques.
Coffee break : 10:30 — 11:00
11h00
Speaker : Pierre Ohlmann (MoVe, LIS)
Title : FO-model checking over monadically stable graph classes
Abstract: In this talk, we will survey recent results (mostly not by me) about efficient model-checking of first-order logic sentences over graphs. We will aim to give a general overview of this very active field, and discuss the different ingredients that come into play. Time permitting, we will go into more detail about one of these ingredients: the Flipper game for characterising monadically stable classes.
2024/02/01: 9h30
Speaker : Guilherme Dias da Fonseca (LIS)
Title : Shadoks Approach to Convex Covering and Other CG:SHOP Challenges
Abstract: The CG:SHOP Challenge is a geometric optimization challenge organized annually as part of the prestigious SoCG conference. A different problem is proposed each year, and the Shadoks team has obtained several good results over the last 6 years. We describe these 6 problems, general algorithmic ideas, and the strategy used in CG:SHOP 2023. The CG:SHOP 2023 challenge consisted of 206 instances, each being a polygon with holes. The goal was to cover each instance polygon with a small number of convex polygons inside the instance polygon (fewer polygons mean a better score). Our strategy was the following: we find a big collection of large (often maximal) convex polygons inside the instance polygon and then solve several set cover problems to find a small subset of the collection that covers the whole polygon.
( Paper: https://arxiv.org/abs/2303.07696 )
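A minimal sketch of the second phase (my illustration; the greedy heuristic below is a standard baseline for set cover, not necessarily what the Shadoks team ran, and the candidate names are made up):

```python
# Greedy set cover: each candidate convex polygon is abstracted as the set
# of "target" elements it covers (e.g. sample points inside the instance).

def greedy_set_cover(universe, candidates):
    """universe: set of elements to cover;
    candidates: dict mapping a polygon id to the set of elements it covers.
    Returns a list of polygon ids forming a cover (assumes one exists)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered elements.
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gained = candidates[best] & uncovered
        if not gained:
            raise ValueError("candidates do not cover the universe")
        cover.append(best)
        uncovered -= gained
    return cover

if __name__ == "__main__":
    universe = {1, 2, 3, 4, 5}
    candidates = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {5}}
    print(greedy_set_cover(universe, candidates))  # ['A', 'C']
```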
10h10
Speaker : Bruno Lévy (INRIA, U. Paris-Saclay)
Title : A Lagrangian method for fluids with free boundaries
Abstract: In this presentation, I’ll describe a numerical simulation method for free-surface fluids. I will start by giving an intuitive understanding of the physical phenomena involved in fluid dynamics: pressure, viscosity and surface tension. Then I will detail the numerical simulation method, based on the Gallouët-Mérigot scheme, which describes the fluid as a set of cells that can deform but keep a constant volume and follow the motion of the fluid (Lagrangian method). The constant-volume constraint takes the form of a partial semi-discrete optimal transport problem, whose solution determines the shape of the cells: each cell corresponds to the intersection between a Laguerre cell and a ball. I will present the geometric and numerical aspects of this optimal transport problem.
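For readers unfamiliar with the objects involved, here is a standard definition (my addition; the notation, and the ball being centred at each site with radius r, are assumptions for illustration, not taken from the abstract):

```latex
% Laguerre (power) cell of a site p_i with weight \psi_i:
\mathrm{Lag}_i(\psi) \;=\; \bigl\{\, x \in \mathbb{R}^d \;:\;
  \|x - p_i\|^2 - \psi_i \;\le\; \|x - p_j\|^2 - \psi_j \ \ \forall j \,\bigr\}.
% Partial semi-discrete optimal transport adjusts the weights \psi so that
% each cell, clipped by a ball B(p_i, r), has the prescribed volume v_i:
\bigl|\,\mathrm{Lag}_i(\psi) \cap B(p_i, r)\,\bigr| \;=\; v_i
  \qquad \text{for all } i.
```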
( Coffee break : 11:10 – 11:30 )
11h30
Speaker : Alexandra Bac (LIS)
Title : Combinatorial duality
Abstract: Alexander duality establishes the relation between the homology of an object and the cohomology of its complement in a sphere. For instance, if X is a subset of the 2-dimensional sphere S2, then each hole of X corresponds to a connected component of S2 \ X, and by symmetry, each hole of S2 \ X corresponds to a connected component of X.
In this work we present a new combinatorial and constructive proof of Alexander duality that provides an explicit isomorphism. The proof shows how to compute this isomorphism using a combinatorial tool called homological colorings. It also provides a one-to-one map between the holes of the object and the holes of its complement, which we use for representing the holes of an object embedded in R3.
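The classical statement being made combinatorial here reads as follows (standard form, my addition; it requires X compact and locally contractible):

```latex
% Alexander duality (reduced homology/cohomology):
\tilde{H}_i(X) \;\cong\; \tilde{H}^{\,n-i-1}\bigl(S^n \setminus X\bigr),
\qquad 0 \le i \le n-1.
% E.g. for X \subset S^2: holes of X (i = 1) correspond to connected
% components of S^2 \setminus X (i = 0), and vice versa by symmetry.
```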
Speaker : Nicolas Peltier (CR CNRS, LIG Grenoble)
Title : A proof procedure in separation logic with inductive definitions
Abstract: Separation logic is used in program verification to facilitate reasoning about dynamically allocated memory. In this talk, I will review recent results concerning the automation of reasoning in Separation Logic with inductively defined predicates. In particular, I will focus on an inductive proof procedure that is sound, complete and terminating for a fragment of inductive definitions satisfying the so-called PCE conditions (Progress, Connectivity, Establishment).
Speaker : Clara Bertolissi (LIS)
Title : Access control policies specification and analysis
Abstract: Access control is a fundamental aspect of computer security; it aims at protecting resources from unauthorised users. The generalised use of access control in modern computing environments has increased the need for high-level declarative languages that enable security administrators to specify a wide range of policy models. In this talk, we introduce the main notions of access control and define a declarative meta-model, called CBAC, able to subsume many of the best-known access control models (e.g., MAC, DAC, RBAC). We also design a graphical representation of CBAC policies that aims at easing the specification and verification tasks for security policy administrators. Using such a representation of policies, answers to usual administrator queries can be computed automatically, and several properties of access control policies can be checked.
Speaker : Cameron Calk (Postdoc, LIS)
Title : Coherence via rewriting in higher Kleene algebras
Abstract: Squier’s coherence theorem and its generalisations provide a categorical interpretation of contracting homotopies via confluent and terminating rewriting. In particular, this approach relates standardisation to coherence results in the context of higher dimensional rewriting systems. On the other hand, modal Kleene algebras (MKAs) have provided a description of properties of abstract rewriting systems, and classic (one-dimensional) consistency results have been formalised in this setting.
In this talk, I will recall the notion of higher Kleene algebra, introduced as an extension of MKAs, which provides a formal setting for reasoning about (higher dimensional) coherence proofs for abstract rewriting systems. First, I will briefly discuss how rewriting techniques are captured in MKAs, before showing how these techniques may be extended to higher dimensions.
Speaker : Liva Ralaivola (Criteo AI Lab)
Title : Differentially-Private Sliced Wasserstein Distance
Abstract: Developing machine learning methods that are privacy-preserving is today a central topic of research, with huge practical impact. Among the numerous ways to address privacy-preserving learning, we here take the perspective of computing divergences between distributions under the Differential Privacy (DP) framework — being able to compute divergences between distributions is pivotal for many machine learning problems, such as learning generative models or domain adaptation. Instead of resorting to the popular gradient-based sanitization method for DP, we tackle the problem at its roots by focusing on the Sliced Wasserstein Distance and seamlessly making it differentially private. Our main contribution is as follows: we analyze the property of adding a Gaussian perturbation to the intrinsic randomized mechanism of the Sliced Wasserstein Distance, and we establish the sensitivity of the resulting differentially private mechanism. One of our important findings is that this DP mechanism transforms the Sliced Wasserstein Distance into another distance, which we call the Smoothed Sliced Wasserstein Distance. This new differentially-private distance can be plugged into generative models and domain adaptation algorithms in a transparent way, and we empirically show that it yields highly competitive performance compared with gradient-based DP approaches from the literature, with almost no loss in accuracy for the domain adaptation problems that we consider. (Joint work with A. Rakotomamonjy, Criteo AI Lab)
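A hedged sketch of the underlying mechanism (my illustration only: the noise scale below is an arbitrary parameter, not calibrated to the sensitivity analysis of the paper, so this is not a DP guarantee):

```python
# Monte-Carlo sliced 2-Wasserstein distance via random projections, with an
# optional Gaussian perturbation of the projections ("smoothing").
import numpy as np

def sliced_wasserstein(X, Y, n_slices=50, noise_std=0.0, rng=None):
    """X, Y: equal-size point clouds of shape (n, d)."""
    gen = np.random.default_rng(rng)
    n, d = X.shape
    total = 0.0
    for _ in range(n_slices):
        theta = gen.normal(size=d)
        theta /= np.linalg.norm(theta)          # random direction on the sphere
        px = np.sort(X @ theta + gen.normal(0.0, noise_std, n))
        py = np.sort(Y @ theta + gen.normal(0.0, noise_std, n))
        total += np.mean((px - py) ** 2)        # 1-D W2^2 via sorted samples
    return np.sqrt(total / n_slices)

if __name__ == "__main__":
    gen = np.random.default_rng(0)
    X = gen.normal(0.0, 1.0, (200, 5))
    Y = gen.normal(1.0, 1.0, (200, 5))
    print(sliced_wasserstein(X, Y, noise_std=0.1, rng=1))
```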
Speaker : Van-Giang Trinh (LIS)
Title : Efficient enumeration of fixed points in complex Boolean networks using answer set programming
Abstract: Boolean Networks (BNs) are an efficient modeling formalism with applications in various research fields such as mathematics, computer science, and, more recently, systems biology. One crucial problem in BN research is to enumerate all fixed points, which has been shown to be essential in the analysis and control of biological systems. Indeed, in that field, BNs originated from the pioneering work of R. Thomas on gene regulation and from the start were characterized by their asymptotic behavior: complex attractors and fixed points. The former being notably more difficult to compute exactly, and specific to certain biological systems, the computation of stable states (fixed points) has been the standard way to analyze those BNs for years. However, with the increase in model size and in the complexity of Boolean update functions, the existing methods for this problem show their limitations. To our knowledge, all the state-of-the-art methods for the fixed point enumeration problem rely on Answer Set Programming (ASP). Motivated by these facts, in this work we propose two new efficient ASP-based methods to solve this problem. We evaluate them on both real-world and pseudo-random models, showing that they vastly outperform four state-of-the-art methods and can handle very large and complex models.
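To fix ideas, here is a brute-force baseline (my addition, emphatically NOT the ASP encodings from the talk; the toy network is made up). The ASP methods exist precisely because enumerating all 2^n states does not scale:

```python
# Enumerate fixed points of a small Boolean network by exhaustive search.
from itertools import product

def fixed_points(update_functions):
    """update_functions: list of functions, each mapping a state tuple to
    the next value (0/1) of one variable. Returns all states s with f(s) = s."""
    n = len(update_functions)
    return [s for s in product((0, 1), repeat=n)
            if all(f(s) == s[i] for i, f in enumerate(update_functions))]

if __name__ == "__main__":
    # Toy 2-gene toggle switch: x0' = NOT x1, x1' = NOT x0
    fs = [lambda s: 1 - s[1], lambda s: 1 - s[0]]
    print(fixed_points(fs))  # [(0, 1), (1, 0)]
```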
Speaker : Sébastien Tavenas (LAMA, Université de Savoie Mont-Blanc)
Title : Superpolynomial lower bounds for circuits of constant depth
Abstract: Any multivariate polynomial P(X_1,…,X_n) can be written as a sum of monomials, i.e., a sum of products of variables and constants of the field. The natural size of such an expression is the number of monomials. But what happens if we add a new level of complexity by considering expressions of the form: sum of products of sums (of variables and constants)? Now it becomes less clear how to show that a given polynomial has no small expression. In this talk we will solve exactly this problem. More precisely, we prove that some explicit polynomials do not have polynomial-sized sum-of-products-of-sums (SPS) representations. We also obtain similar results for SPSPs, SPSPSs, etc., i.e., for all expressions of constant depth. We will also see why this problem can be seen as an important step towards lower bounds on the size of general algebraic circuits.
Speaker : Antonio E. Porreca (CANA, LIS)
Title : The algebra of alternation and synchronisation of finite dynamical systems
Abstract: Many phenomena of scientific and engineering interest can be modelled as finite dynamical systems, that is, as deterministic transition functions iterated over an abstract set of states. Dynamical systems can be put together (or, from the opposite point of view, analysed) in terms of composition operations, among which two of the most basic ones are alternative and synchronous execution. These give rise to a commutative semiring (a “ring without subtraction”) over dynamical systems up to isomorphism, with some rather complex algebraic properties, notably the lack of unique factorisation into irreducibles. Several interesting decomposition properties can then be expressed in terms of polynomial equations. We discuss computability and complexity issues in solving these equations (most interesting cases are intractable, and the general problem is undecidable), as well as some properties of irreducible systems and the (possible) existence of prime ones.
Speaker : Bernadette Charron-Bost (DIENS)
Title : Distributed Computation of the Average.
Abstract: Computing the average of initial values in a multi-agent system is a fundamental problem in distributed control theory. Unfortunately, an impossibility result by Boldi and Vigna shows that deterministic, anonymous agents communicating by broadcast cannot compute functions dependent on the multiplicity of the input values. In particular, they cannot compute the average of their initial values.
I will first recall the proof developed by Boldi and Vigna using the theory of graph fibrations, and will then review various approaches for circumventing the impossibility of computing the average. I will notably present a randomized algorithm that computes the average in linear time in the size $n$ of the network, but only with high probability. I will also present a deterministic algorithm, inspired by the popular Metropolis-Hastings method in statistical physics, that computes the average in $O(n^4)$ rounds, under the additional assumption of bidirectional links.
(Joint work with Patrick Lambein-Monette.)
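As a point of reference (my illustration, not necessarily the algorithm of the talk), the classic Metropolis-weights consensus iteration computes the average on a connected bidirectional network; the specific weights below are the standard textbook choice:

```python
# Each agent repeatedly averages with its neighbours; the Metropolis
# weight 1/(1 + max(deg_i, deg_j)) makes the update matrix doubly
# stochastic, which is exactly what preserves the average.

def metropolis_step(values, neighbours):
    deg = {i: len(neighbours[i]) for i in values}
    return {i: values[i] + sum((values[j] - values[i]) / (1 + max(deg[i], deg[j]))
                               for j in neighbours[i])
            for i in values}

if __name__ == "__main__":
    neighbours = {0: [1], 1: [0, 2], 2: [1]}   # a path on 3 nodes
    values = {0: 0.0, 1: 0.0, 2: 3.0}          # average is 1.0
    for _ in range(100):
        values = metropolis_step(values, neighbours)
    print(values)  # all entries close to 1.0
```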
Speaker : Oscar Defrain (LIS – ACRO team)
Title : Minimal dominating sets enumeration in bipartite graphs.
Abstract : Minimal dominating sets enumeration has gained considerable attention since it was shown equivalent to the long-standing open problem of dualizing a pair of monotone Boolean functions, i.e., deciding whether two positive CNF and DNF are equal. This result, obtained by Kanté, Limouzy, Mary and Nourine in 2014, holds even in the very restricted case of co-bipartite graphs (the vertex set of the graph may be partitioned into two cliques), and was complemented by a tractability result on split graphs (the vertex set may be partitioned into a clique and an independent set). This left the case of bipartite graphs (the vertex set may be partitioned into two independent sets) at the center of attention in the subsequent years. We show it to be tractable, drawing inspiration from a technique of Eiter, Gottlob and Makino from 2003.
(This is a joint work with Marthe Bonamy, Marc Heinrich, Michał Pilipczuk, and Jean-Florent Raymond.)
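To make the enumerated objects concrete, here is a brute-force baseline (my addition, not the output-efficient algorithm of the talk; the example graph is made up):

```python
# Enumerate all minimal dominating sets of a small graph by filtering subsets.
from itertools import combinations

def is_dominating(graph, s):
    """graph: dict vertex -> set of neighbours; s: set of vertices."""
    return all(v in s or graph[v] & s for v in graph)

def minimal_dominating_sets(graph):
    """Yields every dominating set S such that no S - {v} still dominates."""
    vertices = list(graph)
    for k in range(len(vertices) + 1):
        for comb in combinations(vertices, k):
            s = set(comb)
            if is_dominating(graph, s) and \
               all(not is_dominating(graph, s - {v}) for v in s):
                yield frozenset(s)

if __name__ == "__main__":
    # A 4-cycle a-b-c-d (bipartite): every pair of vertices dominates,
    # so there are exactly six minimal dominating sets.
    c4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
    print(list(minimal_dominating_sets(c4)))
```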
Speaker : Pablo Concha Vega (LIS – CANA team)
Title : On the Complexity of Stable and Biased Majority.
Abstract: A majority automaton is a two-state cellular automaton where each cell updates its state according to the most represented state in its neighborhood. A question that naturally arises in the study of these dynamical systems is whether there exists an efficient algorithm to compute the state configuration reached by the system at a given time-step. This problem is called the prediction problem. In this work, we study the prediction problem in a more general setting in which the local functions can differ in their behavior in tie cases. We define two types of local rules: stable majority and biased majority. The first remains invariant in tie cases, and the second takes the value 1. We call this class heterogeneous majority cellular automata (HMCA). For this class, we prove that in two or more dimensions, the problem is P-complete as a consequence of the capability of the system to simulate Boolean circuits.
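A minimal sketch of one synchronous step of the two rules (my illustration; the choice of a 4-neighbour von Neumann neighbourhood on a torus is an assumption made so that ties can actually occur, not a detail from the abstract):

```python
# One step of a 2-D two-state majority cellular automaton with the two
# tie-breaking rules: "stable" keeps the current state, "biased" takes 1.
import numpy as np

def majority_step(grid, rule="stable"):
    """Synchronous update on a torus; each cell counts its 4 orthogonal
    neighbours (an even-size neighbourhood, so ties are possible)."""
    g = np.asarray(grid, dtype=int)
    ones = sum(np.roll(g, s, axis=a) for s in (-1, 1) for a in (0, 1))
    tie_value = g if rule == "stable" else 1
    return np.where(ones > 2, 1, np.where(ones < 2, 0, tie_value))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = rng.integers(0, 2, (8, 8))
    print(majority_step(grid, rule="stable"))
    print(majority_step(grid, rule="biased"))
```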
Speaker : Mathieu Desbrun (GeomeriX team, Inria/Ecole Polytechnique)
Title : Exploiting the unreasonable effectiveness of geometry in computing
Abstract : While the fields of geometric design and numerical simulation have mostly evolved independently, we are now witnessing a convergence of thought: from isogeometric analysis to geometric integrators and discrete exterior calculus, it has been repeatedly noted that the quality of computational tools ultimately boils down to properties of a fundamentally geometric or topological nature. This talk will give a brief overview of our approach to computing through the lens of geometry, offering a versatile and efficient toolbox for a variety of applications, from shape processing to tangent vector field editing, and from fluid simulation to non-linear dimensionality reduction if time allows. We will point out how a strong grasp of classical differential geometry, paired with a good understanding of the typical computational constraints in research and industry, can bring forth novel theoretical and practical foundations for general-purpose computations. The importance of preserving differential geometric notions in the discrete setting will be a recurring theme throughout the talk, demonstrating the value of a geometric approach to computations.
Bio: After obtaining a PhD in computer graphics in Grenoble, France, Desbrun joined Caltech as a postdoctoral fellow in 1998. He joined the CS department at the University of Southern California as an Assistant Professor in January 2000, where he remained for four years in charge of the GRAIL lab. He then became an Associate Professor at Caltech in the CS department in 2003, where he started the Applied Geometry lab and was awarded the ACM SIGGRAPH New Researcher award. He took on administrative duties after he became a full professor, becoming the founding chair of the Computing + Mathematical Sciences department and the director of the Information Science and Technology initiative from 2009 to 2015. More recently, he has been the Technical Papers Chair for the ACM SIGGRAPH 2018 conference, spent a sabbatical year at ShanghaiTech in the School of Information Science and Technology, and was elected as ACM Fellow in 2020. He is now working at LIX as both an advanced researcher at Inria Saclay and a Professor at Ecole Polytechnique. He started the Geomerix research team with three local colleagues to focus on geometric numerics, covering data analysis, machine learning, and simulation.
Title : A Heuristic for Short Homology Basis of Digital Objects
Abstract : Finding the minimum homology basis of a simplicial complex is a hard problem unless one considers only the first homology group. In this talk, we introduce a general heuristic for finding a short homology basis of any dimension for digital objects (that is, for their associated cubical complexes) with complexity $\mathcal{O}(m^3 + \beta_q \cdot n^3)$, where $m$ is the size of the bounding box of the object, $n$ is the size of the object and $\beta_q$ is the rank of its $q$-th homology group. Our heuristic makes use of thickness-breadth balls, a tool for visualizing and locating holes in digital objects. We evaluate our algorithm on a data set of 3D digital objects and compare it with an adaptation of the best current algorithm for computing the minimum-radius homology basis, by Dey, Li and Wang.
Speaker : Yann-Situ, LIS
Title : A study of computational homological objects
Abstract : In computational geometry, many works aim to analyse, understand or classify shapes. Owing to its computational properties, homology is an interesting topological descriptor. Indeed, a large amount of homological information can be computed from a discrete representation of objects (chain complexes, simplicial complexes, etc.). There exist various methods for homology computation, providing information of varying complexity and relevance. We will introduce several mathematical objects associated with this homological information. We will discuss their differences, their computational properties and their theoretical relationships, and present some research directions.
Speaker : Olga Kouchnarenko, FEMTO-ST & INRIA Lille, SPIRALS project team
Title : Testing Dynamic Reconfigurations of Component-based Systems w.r.t. Adaptation Policies
Abstract : Component systems are designed as sets of components that may reconfigure themselves according to adaptation policies. Adaptation policies are seen as artifacts that describe desirable behaviors of the system without enforcing a specific one. An adaptation policy is designed as a set of rules that indicate, for a given set of configurations, which reconfiguration operations can be triggered, with fuzzy values representing their respective utility. In this context, we present a model-based testing approach which aims to generate large test suites in order to measure the occurrences of reconfigurations and compare them to the utility values specified in the adaptation rules. This process is based on a usage model of the system, used to stimulate the system and provoke reconfigurations. As the system may reconfigure dynamically, this online test generator observes the system’s responses and evolution in order to decide the next appropriate test step to perform. In the end, the relative frequencies of the reconfigurations are measured in order to determine whether the adaptation policy is faithfully implemented. The approach is illustrated by simulations on platoons of (semi-)autonomous vehicles.
Speaker : Rim Saddem, LIS, Mofed team
Title : Business Processes Meet Spatial Concerns: the sBPMN Verification Framework
Abstract : BPMN is the standard for business process modeling. It includes a rich set of constructs for control-flow, inter-process communication, and time-related concerns. However, spatial concerns are left aside, even though they are essential to several application domains. We propose a comprehensive extension of BPMN to deal with this. Our proposal includes an integrated notation, a first-order logic semantics of the extension, and tool-supported verification means through the implementation of the semantics in TLA+. Our tool support and our model database are open source and freely available online.
Speaker : Julie Parreaux, LIS, Move team
Title : Playing Stochastically in Weighted Timed Games to Emulate Memory
Abstract : Weighted timed games are two-player zero-sum games played in a timed automaton equipped with integer weights. We consider optimal reachability objectives, in which one of the players, whom we call Min, wants to reach a target location while minimising the cumulated weight. While knowing whether Min has a strategy to guarantee a value lower than a given threshold is undecidable (with two or more clocks), several conditions, one of them being divergence, have been given to recover decidability. In such weighted timed games (as in untimed weighted games in the presence of negative weights), Min may need finite memory to play (close to) optimally. It is thus tempting to try to emulate this finite memory with other strategic capabilities. In this work, we allow the players to use stochastic decisions, both in the choice of transitions and of timing delays. We give for the first time a definition of the expected value in weighted timed games, overcoming several theoretical challenges. We then show that, in divergent weighted timed games, the stochastic value is indeed equal to the classical (deterministic) value, thus proving that Min can guarantee the same value while using only stochastic choices and no memory. This is joint work with Benjamin Monmege and Pierre-Alain Reynier.
Speaker : Joao Marques-Silva, IRIT, ANITI
Title : Logic-Based Explainability in Machine Learning
Abstract : The forecast applications of machine learning (ML) in high-risk and safety-critical settings hinge on systems that are robust in their operation and that can be trusted. This talk gives a broad overview of ongoing efforts in applying logic-enabled automated reasoning tools to explain black-box ML models. The talk details the computation of rigorous explanations for the predictions of black-box models, and illustrates how these serve to assess the quality of widely used heuristic explanation approaches. The talk also overviews properties relating different kinds of rigorous explanations. Finally, it briefly overviews a number of emerging topics of research in formal explainability.
Speaker : Stéphane Grandcolas, LIS
Title : WRM: a metaheuristic algorithm for ship weather routing
Abstract : In recent years, ship weather routing has attracted a lot of interest, resulting from the significant increase in maritime transport. The primary objective is to limit costs, but other parameters, such as time, safety or preservation of the environment, can also be considered. These aspects, and the fact that the domains of the variables are continuous, make the problem difficult to solve. We propose a metaheuristic algorithm called WRM (for Weather Routing Metaheuristic) that aims at finding routes of minimal cost within a given time period. The cost can be the fuel oil consumption, the amount of greenhouse gas emissions or any other measure. It depends on the weather conditions that are expected and on the speed of the vessel, which can vary all along the route. Constraints forbidding or penalizing navigation in specific conditions or in given areas can easily be enforced. The method is simple and general. It converges progressively towards the most promising regions, generating new potential waypoints which are not derived from a predefined mesh. Simulating the fuel oil consumption of the vessel according to the expected wind and wave conditions, we have performed experiments that show the efficiency of the approach.
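To convey the flavour of waypoint-based routing (my illustration: a generic local-search skeleton, NOT the WRM algorithm, with a made-up weather-penalty cost):

```python
# A route is a list of waypoints; we repeatedly perturb one intermediate
# waypoint and keep the change when it lowers the weather-dependent cost.
import math, random

def leg_cost(p, q, weather):
    """Toy cost of sailing p -> q: distance scaled by a weather penalty at
    the midpoint (stands in for wind/wave-dependent fuel consumption)."""
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return math.dist(p, q) * weather(mid)

def route_cost(route, weather):
    return sum(leg_cost(p, q, weather) for p, q in zip(route, route[1:]))

def improve(route, weather, iters=5000, step=0.5, seed=0):
    rng = random.Random(seed)
    route = [list(p) for p in route]
    best = route_cost(route, weather)
    for _ in range(iters):
        i = rng.randrange(1, len(route) - 1)      # never move start/goal
        old = route[i][:]
        route[i][0] += rng.uniform(-step, step)
        route[i][1] += rng.uniform(-step, step)
        cost = route_cost(route, weather)
        if cost < best:
            best = cost                            # keep the improvement
        else:
            route[i] = old                         # undo the perturbation
    return route, best

if __name__ == "__main__":
    storm = lambda p: 1 + 2 * math.exp(-((p[0] - 5) ** 2 + p[1] ** 2))
    route = [(0.0, 0.0), (2.5, 0.0), (5.0, 0.0), (7.5, 0.0), (10.0, 0.0)]
    print(improve(route, storm))  # waypoints bend away from the storm at (5, 0)
```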
Speaker : Matthieu Py, LIS
Title : Producing certificates for the Max-SAT problem
Abstract : In this talk, we focus on producing certificates for the maximum satisfiability problem. The maximum satisfiability problem, or Max-SAT problem, consists, given a propositional formula in conjunctive normal form, in finding how to assign a Boolean value to each variable of the formula so as to satisfy as many clauses as possible or, equivalently, to falsify as few clauses as possible. The presented work deals with the generation of optimality certificates for the Max-SAT problem, that is, not only being able to give the minimum number of clauses that must be falsified, but also producing a proof of optimality of the value found.
To produce such certificates, we first address the problem of adapting resolution refutations into max-refutations. A resolution refutation is a certificate of unsatisfiability of a formula; in other words, it is a proof that a set of clauses cannot be satisfied. Resolution refutations are very useful for the SAT problem, but they are unusable as such for the Max-SAT problem. Indeed, while exhibiting one resolution refutation of a formula does prove that at least one clause of the formula must always be falsified, exhibiting ten resolution refutations of the same formula does not prove that at least ten clauses must always be falsified. It is therefore necessary to adapt these resolution refutations so that they can be used for Max-SAT. These adapted refutations, called max-refutations, form an essential building block of certificates for the Max-SAT problem.
Building on this adaptation of resolution refutations into max-refutations, we show how to generate certificates for the Max-SAT problem. To do so, we iteratively call a SAT oracle to compute a resolution refutation of the current formula, then transform this refutation into a max-refutation and apply it to the current formula. This process is repeated until the current formula becomes satisfiable; the set of computed max-refutations, together with the interpretation satisfying the last formula, forms a certificate for the Max-SAT problem. Thus, to prove that the Max-SAT optimum of a formula with 50 clauses is to satisfy 40 of them, one can compute and exhibit 10 max-refutations (proving that at least 10 clauses must be falsified) and exhibit an interpretation satisfying the formula obtained after applying the 10 max-refutations (proving that it is possible to falsify exactly 10 clauses).
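For concreteness, here is a brute-force check of the quantity being certified (my addition, emphatically NOT the SAT-oracle/max-refutation procedure described above; the tiny CNF is made up):

```python
# Compute the minimum number of falsified clauses of a small CNF by trying
# all assignments. A clause is a list of non-zero ints, negative = negated.
from itertools import product

def min_falsified(n_vars, clauses):
    def falsified(assign):
        # A clause is falsified when every one of its literals is false.
        return sum(all(assign[abs(l) - 1] != (l > 0) for l in c)
                   for c in clauses)
    return min(falsified(a) for a in product((False, True), repeat=n_vars))

if __name__ == "__main__":
    # x1, NOT x1, (x1 OR x2), (x1 OR NOT x2): any assignment falsifies
    # exactly one of the first two clauses, and x1 = True satisfies the rest.
    clauses = [[1], [-1], [1, 2], [1, -2]]
    print(min_falsified(2, clauses))  # 1
```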