Monday, 4 February
We will develop the concept of a quantum computation, which will serve to introduce the basic ingredients of quantum physics from a computer science point of view. We will describe various examples where quantum computation is believed to offer an exponential benefit over classical computation, including an outline of Shor's efficient quantum factoring algorithm, the quantum Fourier transform and its generalisation to arbitrary finite groups (if time permits). In the second talk we will consider the subject of communication complexity (i.e. distributed computation), where quantum effects -- now exploiting so-called quantum non-locality -- can provide remarkable benefits over classical protocols. After setting the scene, we will discuss some examples and also include a discussion of the fundamental process known as quantum teleportation.
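As an illustration of the reduction behind Shor's algorithm, here is a minimal sketch in Python of factoring by order finding. This is only the classical skeleton: the order-finding step, which the quantum Fourier transform performs efficiently in the real algorithm, is brute-forced here, and all function names are ours.

```python
from math import gcd
from random import randrange

def find_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n).
    Brute force stands in for the quantum order-finding step."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    """Find a non-trivial factor of an odd composite n via order finding."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g                      # lucky: a already shares a factor
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:                # y is a non-trivial square root of 1
                return gcd(y - 1, n)
```

For n = 15 the only unlucky choice is a = 14, so the loop terminates almost immediately; the whole point of the quantum algorithm is that `find_order` becomes polynomial-time.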
Albert Burroni, Multi-dimensional automata
The structure of a monoid is linked to the study of finite automata and of word rewriting. A natural generalisation of the monoid structure is that of a category and then, in higher dimensions, that of an n-category (n a natural number). The computational structures of theoretical computer science (automata, nets, grammars, equational logic, ...) are linked to the study of n-categories for n = 1, 2 or 3, and generalisations are easy to describe in this framework. All these structures reduce to a single one, that of the logograph.
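To make the link between monoid presentations and word rewriting concrete, here is a minimal sketch: a naive string-rewriting normaliser applied to the presentation of the free commutative monoid on {a, b} by the single rule ba -> ab. The example and its encoding are ours, not taken from the talk.

```python
def normalize(word, rules):
    """Apply string-rewriting rules (lhs -> rhs) until no
    left-hand side occurs in the word."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)  # one step of rewriting
                changed = True
    return word

# Presentation of the free commutative monoid on {a, b}: the rule ba -> ab
# sorts every word into its normal form a^m b^n.
rules = [("ba", "ab")]
```

For instance `normalize("abbaba", rules)` returns `"aaabbb"`; the rule system is terminating and confluent, so normal forms decide equality in the monoid.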
Marc Jaekel, Physical time and synchronization
The physical implementation of computation relies on a representation of time. However, the notion of time has been the subject of several changes in the development of modern physics, and different notions of time are still commonly used in theoretical and applied frameworks. After a brief review of the properties pertaining to Newtonian, relativistic and quantum time, we discuss the synchronization and localization procedures underlying the explicit construction of relativistic space-time. We emphasize some consequences for the notion of causality, and for the physical implementation of computation. Comparing synchronous and asynchronous circuits, we show that they reflect two different choices of a causal structure, namely the Newtonian and the relativistic one.
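The synchronisation procedure mentioned above can be sketched numerically. The following is a minimal illustration, with names of our choosing, of the Einstein convention: a distant clock is set by assigning to the reflection event the midpoint of the round-trip times read on the local clock.

```python
def einstein_offset(t_send, t_receive):
    """Einstein synchronisation convention: the distant reflection
    event is assigned the midpoint of the round trip on the local clock."""
    return (t_send + t_receive) / 2.0

# A emits a light signal at t = 0; the echo from B returns at t = 2*d/c.
d, c = 3.0e8, 3.0e8                 # illustrative numbers: one light-second apart
t_reflect = einstein_offset(0.0, 2 * d / c)
# B's clock is set to read 1 s at the reflection event.
```

The choice of the midpoint is exactly the conventional element in the construction of relativistic simultaneity that the talk discusses.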
Sylvain Lippi, Programming with interaction nets
Interaction nets, introduced by Yves Lafont, are a generalization of the proof nets of Jean-Yves Girard. They were originally presented as a model of computation, but we focus instead on the programming-language viewpoint. Indeed, we present a concrete implementation of a graphical interpreter for interaction nets. We first present the interaction combinators and the mechanism of duplication, then we show how this new paradigm can be used to implement some classical algorithms. Finally, we give a surprisingly simple translation into Prolog.
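As a flavour of interaction-net reduction, here is a term-level simulation (not a genuine graph implementation, and not the interpreter of the talk) of unary addition. Agents Z, S and Add interact pairwise: the active pair Add >< S pushes a successor onto the result, and Add >< Z returns the accumulated result.

```python
# Agents: ('Z',) is zero, ('S', t) is successor. The two interaction
# rules for addition are applied to the active pair at the head.

def reduce_add(n, m):
    """Interaction-net style addition of unary numbers n and m."""
    # Rule  Add >< S : consume one S from n, emit one S onto the result.
    # Rule  Add >< Z : n is exhausted; m is the result.
    while n[0] == 'S':
        n, m = n[1], ('S', m)
    return m

two = ('S', ('S', ('Z',)))
three = ('S', two)
five = reduce_add(two, three)
```

Each loop iteration corresponds to one local interaction on an active pair, which is what makes the model naturally parallel.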
Yves Lafont, Algebraic theory of circuits
Boolean circuits with several outputs are a model of classical computation. They can be obtained from a finite set of structural and logical gates using parallel and sequential composition. There is also a notion of equivalence of boolean circuits, which can be derived from a finite set of relations. We are looking for presentations by generators and relations corresponding to reversible and quantum computation. Up to now, we have only achieved a small part of this ambitious program: linear reversible computation. For that purpose, we introduce a notion of canonical form, which is also a strategy for solving linear systems. There are deep motivations for developing such a theory.
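Linear reversible circuits are exactly the invertible matrices over GF(2), generated by CNOT gates (target row += control row mod 2). The following sketch, with names and conventions of our own choosing, decomposes such a matrix into CNOT operations by Gaussian elimination, which is one concrete instance of a canonical-form strategy.

```python
def cnot_decompose(mat):
    """Decompose an invertible 0/1 matrix over GF(2) into CNOT row
    operations (control, target) by Gaussian elimination; applying
    the returned operations to mat yields the identity."""
    m = [row[:] for row in mat]
    n = len(m)
    ops = []
    for col in range(n):
        # Find a pivot row with a 1 in this column.
        pivot = next(r for r in range(col, n) if m[r][col])
        if pivot != col:                  # a row swap is three CNOTs
            for c, t in [(pivot, col), (col, pivot), (pivot, col)]:
                ops.append((c, t))
                m[t] = [x ^ y for x, y in zip(m[t], m[c])]
        for r in range(n):                # clear the rest of the column
            if r != col and m[r][col]:
                ops.append((col, r))
                m[r] = [x ^ y for x, y in zip(m[r], m[col])]
    return ops

M = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1]]
ops = cnot_decompose(M)
```

Reading the operations backwards gives a CNOT circuit computing M, so the word problem for such circuits reduces to comparing matrices.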
Philippe Matherat, Asynchronous circuits
Most of today's digital electronic chips are "synchronous": they change their internal state after each tick of a global clock. The model of computation can ignore parallelism, even though each of these chips may contain millions of logic gates. By contrast, "asynchronous" circuits are defined by the absence of a clock. They constitute a more general class of circuits. As in the synchronous case, and in order to manage the design of very complex chips, it is important that logical design and physical design can be well separated. Circuits for which the result of a computation does not depend on the delays in communications define the class of "delay-insensitive" (DI) circuits. It is possible to find a simple set of primitive DI circuits and a composition law which preserves the delay-insensitive property.
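A standard building block in this setting is the Muller C-element, whose output changes only when both inputs agree; this is how a circuit can wait for two signals whose arrival order and delays are unknown. A minimal behavioural sketch (our own toy model, not from the talk):

```python
def c_element(a, b, prev):
    """Muller C-element: the output follows the inputs when they
    agree, and holds its previous value when they disagree."""
    return a if a == b else prev

# Whatever the wire delays, the output rises only after BOTH inputs
# have risen, and falls only after both have fallen.
history = []
out = 0
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    out = c_element(a, b, out)
    history.append(out)
# history == [0, 1, 1, 0]
```

The insensitivity to the order in which `a` and `b` change is exactly the delay-insensitive property at the scale of a single gate.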
François Lamarche, Operads and geometry of computation
The course will concentrate on Joyal's theory of species of structures, leading to the definition of operads by means of species, and ending with some quick examples and applications. Operads and species provide a rich geometric and algebraic framework for dealing with models of computation like term and graph rewriting. The use of species gives direct access to complexity issues through the associated generating series.
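To illustrate how a generating series counts structures, here is a minimal sketch (our own example) for the species of planar binary trees counted by leaves: its ordinary generating series satisfies T(x) = x + T(x)^2, and extracting coefficients recovers the Catalan numbers.

```python
def tree_series(n_terms):
    """Coefficients of T(x) = x + T(x)^2, the generating series of
    planar binary trees counted by number of leaves."""
    t = [0] * n_terms
    t[1] = 1                               # a single leaf
    for n in range(2, n_terms):
        # A tree with n leaves splits into subtrees with k and n-k leaves.
        t[n] = sum(t[k] * t[n - k] for k in range(1, n))
    return t

coeffs = tree_series(7)
# coeffs[1:] == [1, 1, 2, 5, 14, 42]: the Catalan numbers.
```

The same recursion read asymptotically gives the exponential growth rate of the number of trees, which is the kind of complexity information the series makes directly accessible.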
Prakash Panangaden & Rick Blute, Quantum physics and discrete causal dynamics
One of the subtle and surprising features of quantum systems is the presence of non-local correlations. The famous EPR thought experiment first led to the idea that it would be possible to have spatially separated systems exhibiting correlations. This phenomenon - commonly called "entanglement" - is a key ingredient in many quantum algorithms. At first sight it appears that one is dangerously close to causality violation. In this talk we review the basic ideas of quantum mechanics and discuss the phenomenon of entanglement. We work in a discrete setting and consider the problem of defining causal evolution without restricting the theory so much as to exclude the possibility of entanglement. We present our own proposal for causal evolution and show how it resolves the tension between causality and non-locality. The following talk by Richard Blute gives a categorical description of causal evolution.
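The EPR correlations can be computed directly from the formalism. The following sketch (our own, stdlib-only) evaluates the expectation of the product of two spin measurements on the singlet state, along directions a and b in a plane; the textbook answer is -cos(a - b), so equal angles give perfect anti-correlation.

```python
from math import cos, sin

def singlet_correlation(a, b):
    """<psi| S_a (x) S_b |psi> for the singlet |psi> = (|01> - |10>)/sqrt(2),
    where S_theta = cos(theta) Z + sin(theta) X."""
    s = 2 ** -0.5
    psi = [0.0, s, -s, 0.0]                # amplitudes of |00>,|01>,|10>,|11>
    def meas(theta):
        return [[cos(theta), sin(theta)], [sin(theta), -cos(theta)]]
    A, B = meas(a), meas(b)
    e = 0.0                                # explicit tensor contraction
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    e += psi[2 * i + j] * A[i][k] * B[j][l] * psi[2 * k + l]
    return e
```

That these correlations exceed any local classical bound, yet cannot be used to signal, is precisely the tension between non-locality and causality discussed in the talk.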
Eric Goubault, Introduction to algebraic topology and homotopy of parallel computation
This course will present some recent ideas and results about modelling the flow of execution of concurrent processes. One important motivation is that most of the properties of interest to computer science (relating to the coordination of processes) can be read off from the geometry of this space of states. In particular, some methods from algebraic topology (fundamental group, homology, etc.) are natural and useful in this setting. The only trouble is that the usual homotopy framework has to be slightly changed in order to take into account the direction of time. Most of this course will focus on this framework and will hinge on applications of geometrical (algebraic-topological) ideas and modelling to computer-science problems, among which are:
- static analysis of concurrent programs (mostly Java-like threads); in particular deadlocks, reachable states, scheduling properties (joint work with Martin Raussen and Lisbeth Fajstrup), and state-space reduction techniques;
- scheduling properties of distributed systems; in particular serialisability conditions for databases (after Jeremy Gunawardena and Martin Raussen), and computability in fault-tolerant systems (after Maurice Herlihy and Sergio Rajsbaum);
- relationships with other models for concurrency (for instance transition systems and asynchronous automata).
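The deadlocks mentioned above are points of the state space from which no directed path continues. A minimal combinatorial sketch (our own encoding) explores the states of the classic two-process example where each process takes two locks in opposite orders, and reports the stuck states:

```python
from collections import deque

# Two processes take locks 'a' and 'b' in opposite orders.
P = [["lock a", "lock b", "unlock b", "unlock a"],
     ["lock b", "lock a", "unlock a", "unlock b"]]

def enabled(pc, proc, locks):
    """Next lock map if process proc can execute its instruction, else None."""
    if pc == len(P[proc]):
        return None
    op, lk = P[proc][pc].split()
    if op == "lock" and lk in locks:
        return None                        # blocked: the lock is held
    new = dict(locks)
    if op == "lock":
        new[lk] = proc
    else:
        del new[lk]
    return new

def deadlocks():
    """Breadth-first search of the state space; return non-final stuck states."""
    start = (0, 0, ())
    seen, queue, stuck = {start}, deque([start]), []
    while queue:
        pc1, pc2, lk = queue.popleft()
        locks = dict(lk)
        moves = []
        for proc, pc in ((0, pc1), (1, pc2)):
            new = enabled(pc, proc, locks)
            if new is not None:
                moves.append((pc1 + (proc == 0), pc2 + (proc == 1),
                              tuple(sorted(new.items()))))
        if not moves and (pc1, pc2) != (4, 4):
            stuck.append((pc1, pc2))       # neither process can move
        for s in moves:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return stuck
```

Here `deadlocks()` finds the state where both processes hold their first lock and wait for the other's, i.e. program counters (1, 1); geometrically this is the unreachable-future point in the "Swiss flag" picture of the course.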