\documentclass[11pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{indentfirst}
\usepackage{enumerate}
\usepackage{cite}
\usepackage{caption}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}

% Custom packages
\usepackage{leftrule_theorems}
\usepackage{my_listings}
\usepackage{my_hyperref}
\usepackage{math}
\usepackage{concurgames}

\newcommand{\qtodo}[1]{\colorbox{orange}{\textcolor{blue}{#1}}}
\newcommand{\todo}[1]{\colorbox{orange}{\qtodo{\textbf{TODO:} #1}}}
\newcommand{\qnote}[1]{\colorbox{Cerulean}{\textcolor{Sepia}{[#1]}}}
\newcommand{\note}[1]{\qnote{\textbf{NOTE:} #1}}

\author{Théophile \textsc{Bastian}, supervised by Glynn \textsc{Winskel}
and Pierre \textsc{Clairambault} \\
\begin{small}Cambridge University\end{small}}
\title{Internship report\\Concurrent games as event structures}
\date{June-July 2016}

\begin{document}
\maketitle

\todo{abstract}

\tableofcontents

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}

\paragraph{Game semantics} are a kind of denotational semantics in which a
program's behavior is abstracted as a two-player game, in which Player plays
for the program and Opponent plays for the environment of the program (the
user, the operating system, \ldots). The execution of a program, in this
formalism, is then represented as a succession of moves. For instance, the
user pressing a key on the keyboard would be a move of Opponent, to which
Player could react by triggering the corresponding action (\eg{} adding the
corresponding letter in a text field).

Game semantics emerged mostly with~\cite{hyland2000pcf}
and~\cite{abramsky2000pcf}, which independently established a fully-abstract
model for PCF using game semantics, where ``classic'' semantics had failed to
provide a fully-abstract, reasonable and satisfying model. But the field
mostly gained notoriety with the development of techniques to capture
imperative programming language constructs, among which reference
handling~\cite{abramsky1996linearity}, followed by higher-order
references~\cite{abramsky1998references}, making it possible to model
languages with side effects, or exception handling~\cite{laird2001exceptions}.
Since then, the field has been deeply explored, providing a wide range of
such constructions in the literature.

A success of game semantics is to provide \emph{compositional} and
\emph{syntax-free} semantics. Syntax-free, because representing a program as
a strategy on a game totally abstracts it from the original syntax of the
programming language, representing only the behavior of a program reacting
to its execution environment, which is often desirable in semantics.
Compositional, because game semantics are usually defined by induction over
the syntax, and thus easily composed. For instance, it is worth noting that
the application of one term to another is represented as the
\emph{composition} of the two strategies.

\paragraph{Concurrency in game semantics.} In the continuity of the efforts
put forward to model imperative primitives in game semantics, it was natural
to focus at some point on modelling concurrency. The problem was tackled by
\fname{Laird}~\cite{laird2001game}, who introduced game semantics for a
\lcalc{} with a few additions, notably a \emph{parallel execution} operator
and communication on channels.
It is often considered, though, that \fname{Ghica} and
\fname{Murawski}~\cite{ghica2004angelic} really took the fundamental step by
defining game semantics for a fine-grained concurrent language including
parallel execution of ``threads'' and low-level semaphores --- a far more
realistic approach to the problem.

However, both of these constructions are based on \emph{interleavings}. That
is, they model programs on \emph{tree-like games}, games in which the moves
that a player is allowed to play at a given point are represented as a tree
(\eg, in a state $A$, Player can play the move $x$ by following an edge of
the tree starting from $A$, thus reaching $B$ and allowing Opponent to play a
given set of moves --- the outgoing edges of $B$). Concurrency is then
represented as the \emph{interleaving} of all possible sequences of moves, in
order to reach a game tree in which every possible combination of
``unordered'' moves (\ie{} moves not enclosed in any kind of synchronisation
block, as with semaphores) is a valid path.

However, this approach introduces non-determinism in the strategies: if two
moves are available to a player, the model states that they make a
non-deterministic uniform choice. Yet, one could reasonably argue that a
program should behave consistently with the environment, which would mean
that the semantics of a program --- even a concurrent one --- should still be
deterministic. This idea was explored outside of the game semantics context,
for instance by~\cite{reynolds1978syntactic}, establishing a type-checking
system to restrict concurrent programs to deterministic ones. Some recent
work makes use of linear logic~\cite{caires2010session} for similar purposes
as well. Yet, the interleaving game semantics of these languages remain
non-deterministic.

\paragraph{The purpose of this internship} was to try to take a first step
towards the reunification of those two developments. For that purpose, my
objective was to give a \emph{deterministic} game semantics to a linear
lambda-calculus enriched with parallel and sequential execution operators, as
well as synchronization on channels. In order to model this, I used the games
as \emph{event structures} formalism, described later on and introduced
in~\cite{rideau2011concurrent} by S. \fname{Rideau} and G. \fname{Winskel}.
Roughly, event structures represent a strategy as a \emph{partial order} on
the moves, stating for instance that move $x$ can only be played after move
$y$, which is more flexible than tree-like game approaches.

Although a full-abstraction result could not be reached --- though it does
not seem far away ---, I have proved the \emph{adequacy} of the operational
and denotational semantics, and have obtained an implementation of the
(denotational) game semantics, that is, code that translates a term of the
language into its corresponding strategy.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A linear \lcalc{} with concurrency primitives: \llccs}

The language on which my internship was focused was meant to be simple, easy
to parse and easy to work on, both in theory and in the implementation. It
should of course include concurrency primitives. For these reasons, we chose
to consider a variant of CCS~\cite{milner1980ccs} --- a simple standard
language including parallel and sequential execution primitives, as well as
synchronization of processes through \emph{channels} ---, lifted up to the
higher order through a \lcalc.
The language was then restricted to a \emph{linear} one --- that is, each
identifier declared must be referred to exactly once ---, partly to keep the
model simple, partly to meet the determinism requirements.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{A linear variant of CCS: \linccs}

The variant of CCS we chose to use has two base types:
\emph{processes}~($\proc$) and \emph{channels}~($\chan$). It has two base
processes, $0$ (failure) and $1$ (success), although a process can be
considered ``failed'' without reducing to $0$ (in case of deadlock).

\begin{figure}[h]
\begin{minipage}[t]{0.60\textwidth}
\begin{center}Terms\end{center}\vspace{-1em}
\begin{align*}
t,u,\ldots ::=~&1 & \text{(success)}\\
\vert~&0 & \text{(error)}\\
\vert~&t \parallel u & \text{(parallel)}\\
\vert~&t \cdot u & \text{(sequential)}\\
\vert~&(\nu a) t & \text{(new channel)}
\end{align*}
\end{minipage} \hfill \begin{minipage}[t]{0.35\textwidth}
\begin{center}Types\end{center}\vspace{-1em}
\begin{align*}
A,B,\ldots ::=~&\proc & \text{(process)} \\
\vert~&\chan & \text{(channel)}
\end{align*}
\end{minipage}
\caption{\linccs{} terms and types}\label{fig:lccs:def}
\end{figure}

The syntax is pretty straightforward to understand: $0$ and $1$ are base
processes; $\parallel$ executes its two operands in parallel; $\cdot$
executes its two operands sequentially (or synchronizes on a channel if its
left-hand operand is a channel); $(\nu a)$ creates a new channel $a$ on which
two processes can be synchronized. Here, ``synchronization'' simply means
that a call to the channel blocks until the other end of the channel has been
called as well.

The language is simply typed as in figure~\ref{fig:lccs:typing}. Note that
binary operators split their environment between their two operands, ensuring
that each identifier is used at most once, and that no rule (in particular no
axiom rule) ``forgets'' any part of the environment, ensuring that each
identifier is used at least once.

\begin{figure}[h]
\begin{align*}
\frac{~}{\,\vdash 0:\proc} & (\textit{Ax}_0) &
\frac{~}{\,\vdash 1:\proc} & (\textit{Ax}_1) &
\frac{~}{t:A \vdash t:A} & (\textit{Ax}) &
\frac{\Gamma, a:\chan, \bar{a}:\chan \vdash P : \proc}
{\Gamma \vdash (\nu a) P : \proc} & (\nu)
\end{align*}\vspace{-1.5em}\begin{align*}
\frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
{\Gamma,\Delta \vdash P \parallel Q : \proc} & (\parallel) &
\frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
{\Gamma,\Delta \vdash P \cdot Q : \proc} & (\cdot_\proc) &
\frac{\Gamma \vdash P : \proc}
{\Gamma,a:\chan \vdash a \cdot P: \proc} & (\cdot_\chan)
\end{align*}
\vspace{-1.5em}
\caption{\linccs{} typing rules}\label{fig:lccs:typing}
\end{figure}

We also equip this language with an operational semantics, in the form of a
labeled transition system (LTS), as described in
figure~\ref{fig:lccs:opsem}, where $a$ denotes a channel and $x$ denotes any
possible label.
\begin{figure}[h]
\begin{align*}
\frac{~}{a \cdot P \redarrow{a} P} & &
\frac{~}{1 \parallel P \redarrow{\tau} P} & &
\frac{~}{1 \cdot P \redarrow{\tau} P} & &
\frac{P \redarrow{\tau_c} Q}
{(\nu a) P \redarrow{\tau} Q} & (c \in \set{a,\bar{a}}) &
\frac{P \redarrow{a} P'\quad Q \redarrow{\bar{a}} Q'}
{P \parallel Q \redarrow{\tau_a} P' \parallel Q'}
\end{align*}\begin{align*}
\frac{P \redarrow{x} P'}
{P \parallel Q \redarrow{x} P' \parallel Q} & &
\frac{Q \redarrow{x} Q'}
{P \parallel Q \redarrow{x} P \parallel Q'} & &
\frac{P \redarrow{x} P'}
{P \cdot Q \redarrow{x} P' \cdot Q} & &
\frac{P \redarrow{x} P'}
{(\nu a)P \redarrow{x} (\nu a)P'} & (x \not\in \set{a,\bar{a},\tau_a})
\end{align*}
\caption{\linccs{} operational semantics}\label{fig:lccs:opsem}
\end{figure}

We consider that a term $P$ \emph{converges} whenever
$P \redarrow{\tau}^\ast 1$, and we write $P \Downarrow$.

The $\tau_a$ reduction scheme may sound a bit unusual. It is, however,
necessary. Consider the reduction of $(\nu a) (a \cdot 1 \parallel \bar{a}
\cdot 1)$: the inner term $\tau_a$-reduces to $1 \parallel 1$, thus allowing
the whole term to reduce to $1$; but if we replaced that $\tau_a$ with a
$\tau$, the whole term would reduce to $(\nu a) 1$, which is not typable
since $a$ and $\bar{a}$ are not consumed. Our semantics would then not
satisfy subject reduction.

Switching to an \emph{affine} \linccs{} while keeping it wrapped in a
\emph{linear} \lcalc{} was considered, but raised far too many problems,
while switching to a fully-affine model would have altered the problem too
deeply.

\subsection{Lifting to the higher order: linear \lcalc}

In order to reach the studied language, \llccs, we have to lift \linccs{} up
to a \lcalc. To do so, we add to the language the constructions of
figure~\ref{fig:llam:syntax}, which are basically the usual \lcalc{}
constructions slightly transformed to be linear (which is mostly reflected
by the typing rules). In particular, the only base types are $\proc$ and
$\chan$.

\begin{figure}[h]
\begin{minipage}[t]{0.55\textwidth}
\begin{center}Terms\end{center}\vspace{-1em}
\begin{align*}
t,u,\ldots ::=~&x \in \mathbb{V} & \text{(variable)}\\
\vert~&t~u & \text{(application)}\\
\vert~&\lambda x^A \cdot t & \text{(abstraction)}\\
\vert~&\text{\linccs}\textit{ constructions} &
\end{align*}
\end{minipage} \hfill \begin{minipage}[t]{0.40\textwidth}
\begin{center}Types\end{center}\vspace{-1em}
\begin{align*}
A,B,\ldots ::=~&A \linarrow B & \text{(linear arrow)}\\
\vert~&\proc~\vert~\chan & \text{(\linccs)}
\end{align*}
\end{minipage}
\caption{Linear \lcalc{} terms and types}\label{fig:llam:syntax}
\end{figure}

To enforce linearity, the only typing rules of the usual \lcalc{} that have
to be changed are $(\textit{Ax})$ and $(\textit{App})$, presented in
figure~\ref{fig:llam:typing}. The $(\textit{Abs})$ rule is the usual one.
\begin{figure}[h]
\begin{align*}
\frac{~}{x : A \vdash x : A} & (\textit{Ax}) &
\frac{\Gamma \vdash t : A \linarrow B \quad \Delta \vdash u : A}
{\Gamma,\Delta \vdash t~u : B} & (\textit{App}) &
\frac{\Gamma, x : A \vdash t : B}
{\Gamma \vdash \lambda x^{A} \cdot t : A \linarrow B} & (\textit{Abs})
\end{align*}
\caption{Linear \lcalc{} typing rules}\label{fig:llam:typing}
\end{figure}

Linearity is thus guaranteed: in the $(\textit{Ax})$ rule, the environment
must be $x:A$ instead of the usual $\Gamma, x:A$, ensuring that each variable
is used \emph{at least once}; while the environment split in the binary
operators' rules ensures that each variable is used \emph{at most once}
(implicitly, $\Gamma \cap \Delta = \emptyset$).

To lift the operational semantics to \llccs, we only need to add one rule:
\[ \frac{P \longrightarrow_\beta P'}{P \redarrow{\tau} P'} \]

%%%%%%%%%%%%%%%%%%%%%
\subsection{Examples}

\todo{Examples}
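As a first example, consider the term
$(\nu a)\,(a \cdot 1 \parallel \bar{a} \cdot 1)$ discussed above, which
synchronizes two processes on the channel $a$. Since
$a \cdot 1 \redarrow{a} 1$ and $\bar{a} \cdot 1 \redarrow{\bar{a}} 1$, the
rules of figure~\ref{fig:lccs:opsem} yield
\[ (\nu a)\,(a \cdot 1 \parallel \bar{a} \cdot 1) \redarrow{\tau}
   1 \parallel 1 \redarrow{\tau} 1 \]
where the first step turns the inner $\tau_a$ into a $\tau$ through the
$(\nu)$ rule, dropping the binder: the term converges. At the higher order,
the term $(\lambda x^{\proc} \cdot (x \parallel 1))~1$ is typable with type
$\proc$ (the abstraction uses its variable exactly once), $\beta$-reduces,
hence $\tau$-reduces, to $1 \parallel 1$, and converges as well.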
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A games model}

Our goal is now to give a games model for the above language. For that
purpose, we will use \emph{event structures}, providing an alternative
formalism to the often-used tree-like games.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The event structures framework}

The traditional approach to concurrent games is to represent them as
\emph{tree-like games}. If the considered game consists of three moves,
namely $A$, $B$ and $C$, where $A$ can be played by Opponent and the others
by Player \emph{after} Opponent has played $A$, that means that the states
of the game will be $\epsilon$, $A$, $A \cdot B$ and $A \cdot C$, which
corresponds to the game tree
\[
\begin{tikzpicture}
\node (1) [ellipse] {A} ;
\node (2) [below left of=1, ellipse] {B};
\node (3) [below right of=1, ellipse] {C};
\path [->] (1) edge (2) edge (3);
\end{tikzpicture}
\]

This can of course be used to describe much larger games, and is often
useful when reasoning about concurrency, since the causal histories appear
clearly: the possible states of the game can be read easily by concatenating
the events found along a path from the root of the tree.

But it also has the major drawback of growing exponentially in size: let us
consider a game in which Opponent must play $A$ and $B$ in no particular
order before Player can play $C$. The corresponding tree-like game would be
\[
\begin{tikzpicture}
\node (11) {$A_1$};
\node (21) [below of=11] {$B_1$};
\node (31) [below of=21] {$C_1$};
\node (22) [right of=11] {$B_2$};
\node (12) [below of=22] {$A_2$};
\node (32) [below of=12] {$C_2$};
\path [->] (11) edge (21) (21) edge (31) (22) edge (12) (12) edge (32);
\end{tikzpicture}
\]

This gets even worse with less structure: since there are $n!$ % chktex 40
permutations of $n$ elements, the tree can grow far bigger.

This problem motivated the use of \emph{event structures} as a formalism to
describe such games~\cite{rideau2011concurrent}. Informally, an event
structure is a partial order $\leq$ on \emph{events} (here, the game's
moves), along with a \emph{consistency} relation. The purpose of the
consistency relation is to describe non-determinism, in which we are not
interested here, since we seek a deterministic model: in all the following
constructions, I will omit the consistency set. The original constructions
including it can be found for instance
in~\cite{castellan2016concurrent,winskel1986event}.

The partial order $e_1 \leq e_2$ means that $e_1$ must have been played
before $e_2$ can be played. For instance, the Hasse diagram of the previous
game would look like
\[
\begin{tikzpicture}
\node (1) {A};
\node (2) [right of=1] {B};
\node (3) [below left of=1, below right of=2] {C};
\path[->] (1) edge (3) (2) edge (3);
\end{tikzpicture}
\]

%%%%%
\subsubsection{Event structures}

\begin{definition}[event structure]
An \emph{event structure}~\cite{winskel1986event} is a poset $(E, \leq_E)$,
where $E$ is a set of \emph{events} and $\leq_E$ is a partial order on $E$
such that for all $e \in E$,
$\downclose{e} \eqdef \set{e' \in E~\vert~e' \leq_E e}$ is finite.

The partial order $\leq_E$ naturally induces a binary relation $\edgeArrow$
over $E$ that is defined as the transitive reduction of $\leq_E$.
\end{definition}

In this context, the right intuition for event structures is that of a set
of events that can occur, the players' moves, along with a partial order
stating that a given move cannot occur before another move.

Event structures are often represented as a directed acyclic graph (DAG)
where the vertices are the elements of $E$ and the edges are the transitive
reduction of $\leq_E$.

\begin{definition}[event structure with polarities]
An \emph{event structure with polarities} (\textit{ESP}) is a triple
$(E, \leq_E, \rho)$, where $(E, \leq_E)$ is an event structure and
$\rho : E \to \set{+,-}$ is a function associating a \emph{polarity} to each
event.
\end{definition}

In order to model games, this is used to represent whether a move is to be
played by Player or Opponent. To represent polarities, we will often use
colors instead of $+$ and $-$ signs: a red-circled event will have a
negative polarity, \ie{} will be played by Opponent, while a green-circled
one will have a positive polarity. The ESP of the previous example would
then be
\[
\begin{tikzpicture}
\node (1) [draw=red,ellipse] {A};
\node (3) [draw=green,ellipse,right of=1] {C};
\node (2) [draw=red,ellipse,right of=3] {B};
\path[->] (1) edge (3) (2) edge (3);
\end{tikzpicture}
\]

\begin{definition}[configuration]
A \emph{configuration} of an ESP $A$ is a finite subset $X \subseteq A$ that
is \emph{down-closed}, \ie{}
\vspace{-0.5em}
\[ {\forall x \in X}, {\forall e \in A}, {e \leq_A x} \implies {e \in X}.\]
$\config(A)$ is the set of configurations of $A$.
\end{definition}

A configuration can thus be seen as a valid state of the game. $\config(A)$
plays a major role in definitions and proofs on games and strategies.

\begin{notation}
For $x,y \in \config(A)$, $x \forkover{e} y$ states that
$y = x \sqcup \set{e}$ (and that both are valid configurations), where
$\sqcup$ denotes the disjoint union. It is also possible to write
$x \forkover{e}$, stating that $x \sqcup \set{e} \in \config(A)$, or
$x \fork y$.
\end{notation}

%%%%%
\subsubsection{Concurrent games}

\begin{definition}[game]
A \emph{game} $A$ is an event structure with polarities. \\
The dual game $A^\perp$ is the game $A$ where all the polarities in $\rho$
have been reversed.
\end{definition}

For instance, one could imagine a game modeling the user interface of a
coffee machine: Player is the coffee machine, while Opponent is a user
coming to buy a drink.

\begin{example}[Process game]
We can represent a process by the following game:
\[
\begin{tikzpicture}
\node (1) at (0,0) [draw=red,ellipse] {call};
\node (2) at (2,0) [draw=green,ellipse] {done};
\path[->] (1) edge (2);
\end{tikzpicture}
\]
The ``call'' event will be triggered by Opponent (the system) when the
process is started, and Player will play ``done'' when the process has
finished, if it ever does.
The relation $\text{call} \leq \text{done}$ means that a process cannot
finish \emph{before} it is called.
\end{example}

\begin{definition}[pre-strategy]
A \emph{pre-strategy} on the game $A$, $\sigma: A$, is an ESP such that
\begin{enumerate}[(i)]
\item $\sigma \subseteq A$;
\item $\config(\sigma) \subseteq \config(A)$;
\item $\forall s \in \sigma, \rho_A(s) = \rho_\sigma(s)$
\end{enumerate}
\end{definition}

\begin{example}[processes, cont.]
A possible \emph{pre-strategy} for the game consisting of two processes put
side by side (in which the game's events are annotated with a number to
distinguish the elements of the two processes) would be
\[
\begin{tikzpicture}
\node (1) at (0,1.2) [draw=red,ellipse] {call$_0$};
\node (2) at (0,0) [draw=green,ellipse] {done$_0$};
\node (3) at (2,1.2) [draw=red,ellipse] {call$_1$};
\path[->] (1) edge (2) (3) edge (2);
\end{tikzpicture}
\]
This pre-strategy is valid: it is a subset of the game that does not include
$\text{done}_1$, but it does include both $\text{call}_0$ and
$\text{done}_0$ and inherits the game's partial order. It describes two
processes working in parallel, where process $0$ waits until process $1$ has
been called before terminating, and process $1$ never returns.
\end{example}

But as it is defined, a pre-strategy does not exactly capture what we expect
of a \emph{strategy}: it is too expressive. For instance, adding the
relation $\text{call}_0 \leq \text{call}_1$ to the above pre-strategy would
be allowed, stating that the operating system cannot decide to start process
$1$ before process $0$. It is not up to the program to decide that; such a
pre-strategy is thus unrealistic. We then have to restrict pre-strategies to
\emph{strategies}:

\begin{definition}[strategy]
A \emph{strategy} is a pre-strategy $\sigma : A$ that ``behaves well'',
\ie{} that is
\begin{enumerate}[(i)]
\item\label{def:receptive} \textit{receptive}: for all
$x \in \config(\sigma)$ \st{} $x \forkover{e^-}$ in $A$, $e \in \sigma$;
\item\label{def:courteous} \textit{courteous}:
$\forall x \edgeArrow_\sigma x' \in \sigma$,
$(\rho(x),\rho(x')) \neq (-,+) \implies x \edgeArrow_A x'$.
\end{enumerate}
\end{definition}

(\ref{def:receptive}) captures the idea that we cannot prevent Opponent from
playing one of their moves. Indeed, not including an event in a strategy
means that this event \emph{will not} be played. It is unreasonable to
consider that a strategy could forbid Opponent from playing a given move.

(\ref{def:courteous}) states that unless a dependency relation is imposed by
the game's rules, a player can only make their own moves depend on Opponent
moves, \ie{} every direct arrow in the partial order that is not inherited
from the game should be ${\ominus \edgeArrow \oplus}$. Clearly, it is
unreasonable to consider an arrow ${\ostar \edgeArrow \ominus}$, which would
mean forcing Opponent to wait for a move (either from Player or Opponent)
before playing their own move; but ${\oplus \edgeArrow \oplus}$ is also
unreasonable, since we are working in a concurrent context. Intuitively,
when Player plays $x$ then $y$, it is undefined whether Opponent will
receive $x$ then $y$ or $y$ then $x$.
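Coming back to the pre-strategy given above for two processes side by side,
one can check that it is, in fact, a strategy: it contains every negative
event of the game ($\text{call}_0$ and $\text{call}_1$), so it is receptive,
and the only immediate causal link that is not inherited from the game,
$\text{call}_1 \edgeArrow \text{done}_0$, is of the form
${\ominus \edgeArrow \oplus}$, so it is courteous.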
%%%%%
\subsubsection{Operations on games and strategies}

\todo{Better progression in this part.}

In order to manipulate strategies and define them by induction over the
syntax, the following operations will be extensively used. It may also be
worth noting that in the original formalism~\cite{castellan2016concurrent},
games, strategies and maps between them form a bicategory in which these
operations play special roles.

In this whole section, unless stated otherwise, $E$ and $F$ denote ESPs,
$A$, $B$ and $C$ denote games, and $\sigma: A$ and $\tau: B$ denote
strategies.

\begin{definition}[Parallel composition]
The \emph{parallel composition} $E \parallel F$ of two ESPs is an ESP whose
events are $\left(\set{0} \times E\right) \sqcup \left(\set{1} \times
F\right)$ (the disjoint tagged union of the events of $E$ and $F$), and
whose partial order is $\leq_E$ on $E$ and $\leq_F$ on $F$, with no relation
between elements of $E$ and $F$.

One can then naturally expand this definition to games (by preserving
polarities) and to strategies.
\end{definition}

In the example before, when talking of ``two processes side by side'', we
actually referred formally to the parallel composition of two processes.

\smallskip

Given two strategies on dual games $A$ and $A^\perp$, it is natural and
interesting to compute their \emph{interaction}, that is, ``what will happen
if one strategy plays against the other''.

\begin{definition}[Closed interaction]
Given two strategies $\sigma : A$ and $\tau : A^\perp$, their
\emph{interaction} $\sigma \wedge \tau$ is the ESP
$\sigma \cap \tau \subseteq A$ from which causal loops have been removed.

More precisely, $\sigma \cap \tau$ is a set adjoined with a \emph{preorder}
${(\leq_\sigma \cup \leq_\tau)}^\ast$ (transitive closure) that may not
respect antisymmetry, that is, may have causal loops. The event structure
$\sigma \wedge \tau$ is then obtained by removing all the elements contained
in such loops from $\sigma \cap \tau$.
\end{definition}

\textit{This construction is a simplified version of the analogous one
from~\cite{castellan2016concurrent} (the pullback), taking advantage of the
fact that our event structures are deterministic --- that is, without a
consistency set.}

This indeed captures what we wanted: $\sigma \wedge \tau$ contains the moves
that both $\sigma$ and $\tau$ are ready to play, ordered by both causal
orders, except for the events that can never be played because of a
``deadlock'' (\ie{} a causal loop).

\smallskip

We might now try to generalize that to an \emph{open} case, where the two
strategies do not play on the same game, but only share a common part. Our
objective here is to \emph{compose} strategies: indeed, a strategy on
$A^\perp \parallel B$ can be seen as a strategy \emph{from $A$ to $B$},
playing as Opponent on a board $A$ and as Player on a board $B$. This
somewhat looks like a function, which could be composed with another
strategy on $B^\perp \parallel C$.

\begin{definition}[Compositional interaction]
Given two strategies $\sigma : A^\perp \parallel B$ and
$\tau : B^\perp \parallel C$, their \emph{compositional interaction}
$\tau \strInteract \sigma$ is an event structure defined as
$(\sigma \parallel C) \wedge (A^\perp \parallel \tau)$, where $A^\perp$ and
$C$ are seen as strategies.
\end{definition}

The idea is to put in correspondence the ``middle'' states (those of $B$)
while adding ``neutral'' states for $A$ and $C$. $\tau \strInteract \sigma$
is an \emph{event structure} (\ie, without polarities): indeed, the two
strategies disagree on the polarities of the middle part. Alternatively, it
can be seen as an ESP with a polarity function over $\set{+,-,?}$.
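To illustrate how the closed interaction rules out deadlocks, consider a
game $A$ consisting of two concurrent events, $a$ positive and $b$ negative,
with no causal order between them. A strategy $\sigma : A$ may add the
(courteous) causal link $b \edgeArrow a$, waiting for $b$ before playing
$a$; dually, a strategy $\tau : A^\perp$, in which $a$ is negative and $b$
positive, may add $a \edgeArrow b$. The preorder
${(\leq_\sigma \cup \leq_\tau)}^\ast$ then contains both $a \leq b$ and
$b \leq a$: the two events form a causal loop, each strategy waiting for the
other, and are both removed, making $\sigma \wedge \tau$ empty, as expected
of such a deadlock.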
From this point, the notion of composition we sought is only a matter of
``hiding'' the middle part:

\begin{definition}[Strategies composition]
Given two strategies $\sigma : A^\perp \parallel B$ and
$\tau : B^\perp \parallel C$, their \emph{composition}
$\tau \strComp \sigma$ is the ESP
$(\tau \strInteract \sigma) \cap (A^\perp \parallel C)$, on which the
partial order is the restriction of $\leq_{\tau \strInteract \sigma}$ and
the polarities those of $\sigma$ and $\tau$.
\end{definition}

It is then useful to consider an identity strategy \wrt{} the composition
operator. This identity is called the \emph{copycat} strategy:

\begin{definition}[Copycat]
The \emph{copycat strategy} of a game $A$, $\cc_A$, is the strategy on the
game $A^\perp \parallel A$ whose events are the whole of
$A^\perp \parallel A$, and whose order is the transitive closure of
$\leq_{A^\perp \parallel A} \cup
\set{ (1-i, x) \leq (i, x) \vert x \in A~\&~\rho((i,x)) = \oplus}$.
\end{definition}

The copycat strategy of a game is indeed an identity for the composition of
\emph{strategies}. In fact, it even holds that for a \emph{pre-}strategy
$\sigma : A$, $\sigma$ is a strategy $\iff$ $\cc_A \strComp \sigma = \sigma$.

\begin{example}[copycat]
If we consider the following game $A$
\[
\begin{tikzpicture}
\node (1) [draw=red,ellipse] {A};
\node (3) [draw=green,ellipse,right of=1] {C};
\node (2) [draw=red,ellipse,right of=3] {B};
\path[->] (3) edge (1);
\end{tikzpicture}
\]
its copycat strategy $\cc_A$ is
\[
\begin{tikzpicture}
\node (01) {($A^\perp$)};
\node (02) [below of=01] {($A$)};
\node (11) [draw=green,ellipse,right of=01] {A};
\node (31) [draw=red,ellipse,right of=11] {C};
\node (21) [draw=green,ellipse,right of=31] {B};
\node (12) [draw=red,ellipse,right of=02] {A};
\node (32) [draw=green,ellipse,right of=12] {C};
\node (22) [draw=red,ellipse,right of=32] {B};
\path[->] (12) edge (11)
(22) edge (21)
(31) edge (32)
(31) edge (11)
(32) edge (12);
\end{tikzpicture}
\]
\end{example}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Interpretation of \llccs}

We can now equip \llccs{} with a denotational semantics, interpreting terms
of the language as strategies, as defined in figure~\ref{fig:llccs:interp}.
\begin{figure}[h] \begin{minipage}[t]{0.45\textwidth} \begin{align*} \seman{x^A} &\eqdef \cc_{\seman{A}} \\ \seman{t^{A \linarrow B}~u^{A}} &\eqdef \cc_{A \linarrow B} \strComp \left( \seman{t} \parallel \seman{u} \right) \\ \seman{\lambda x^A \cdot t} &\eqdef \seman{t} \end{align*} \end{minipage} \hfill \begin{minipage}[t]{0.45\textwidth} \begin{align*} \seman{\alpha} &\eqdef \ominus \\ \seman{A \linarrow B} &\eqdef \seman{A}^\perp \parallel \seman{B} \\ \end{align*}\end{minipage} \begin{align*} \seman{P \parallel Q} &\eqdef \left( \begin{tikzpicture}[baseline, scale=0.8] \node (4) at (0,0.65) [draw=green,ellipse] {call $P$}; \node (5) at (0,-0.65) [draw=red,ellipse] {done $P$}; \node (2) at (2.5,0.65) [draw=green,ellipse] {call $Q$}; \node (3) at (2.5,-0.65) [draw=red,ellipse] {done $Q$}; \node (0) at (5,0.65) [draw=red,ellipse] {call}; \node (1) at (5,-0.65) [draw=green,ellipse] {done}; \path[->] (0) edge (1) edge [bend right] (2) edge [bend right] (4) (2) edge (3) (4) edge (5) (3) edge [bend right] (1) (5) edge [bend right] (1); \end{tikzpicture} \right) \strComp \left(\seman{P} \parallel \seman{Q}\right) & \seman{\proc} = \seman{\chan} &\eqdef \begin{tikzpicture}[baseline] \node (1) at (0,0.5) [draw=red,ellipse] {call}; \node (2) at (0,-0.5) [draw=green,ellipse] {done}; \draw [->] (1) -- (2); \end{tikzpicture} \\ %%%%%%%%%%%%%%%%%%%%%%%%% \seman{P \cdot Q} &\eqdef \left( \begin{tikzpicture}[baseline,scale=0.8] \node (4) at (0,0.65) [draw=green,ellipse] {call $P$}; \node (5) at (0,-0.65) [draw=red,ellipse] {done $P$}; \node (2) at (2.5,0.65) [draw=green,ellipse] {call $Q$}; \node (3) at (2.5,-0.65) [draw=red,ellipse] {done $Q$}; \node (0) at (5,0.65) [draw=red,ellipse] {call}; \node (1) at (5,-0.65) [draw=green,ellipse] {done}; \path[->] (0) edge (1) edge [bend right] (4) (2) edge (3) (4) edge (5) (3) edge [bend right] (1) (5) edge (2); \end{tikzpicture} \right) \strComp \left(\seman{P} \parallel \seman{Q}\right) & \seman{1} &\eqdef \begin{tikzpicture}[baseline] \node (1) at (0,0.5) [draw=red,ellipse] {call}; \node (2) at (0,-0.5) [draw=green,ellipse] {done}; \draw [->] (1) -- (2); \end{tikzpicture} \\ %%%%%%%%%%%%%%%%%%%%%%%%% \seman{(a : \chan) \cdot P} &\eqdef \left( \begin{tikzpicture}[baseline,scale=0.8] \node (4) at (0,0.65) [draw=green,ellipse] {call $P$}; \node (5) at (0,-0.65) [draw=red,ellipse] {done $P$}; \node (2) at (2.5,0.65) [draw=green,ellipse] {call $a$}; \node (3) at (2.5,-0.65) [draw=red,ellipse] {done $a$}; \node (0) at (5,0.65) [draw=red,ellipse] {call}; \node (1) at (5,-0.65) [draw=green,ellipse] {done}; \path[->] (0) edge (1) edge [bend right] (2) (2) edge (3) (4) edge (5) (3) edge (4) (5) edge [bend right] (1); \end{tikzpicture} \right) \strComp \left(\seman{P} \parallel \seman{a}\right) & \seman{0} &\eqdef \begin{tikzpicture}[baseline] \node (1) at (0,0.2) [draw=red,ellipse] {call}; \end{tikzpicture} \\ %%%%%%%%%%%%%%%%%%%%%%%%% \seman{(\nu a) P} &\eqdef \left( \begin{tikzpicture}[baseline,scale=0.8] \node (6) at (0,0.65) [draw=green,ellipse] {call $a$}; \node (7) at (0,-0.65) [draw=red,ellipse] {done $a$}; \node (4) at (2.5,0.65) [draw=green,ellipse] {call $\bar{a}$}; \node (5) at (2.5,-0.65) [draw=red,ellipse] {done $\bar{a}$}; \node (2) at (5,0.65) [draw=green,ellipse] {call $P$}; \node (3) at (5,-0.65) [draw=red,ellipse] {done $P$}; \node (0) at (7.5,0.65) [draw=red,ellipse] {call}; \node (1) at (7.5,-0.65) [draw=green,ellipse] {done}; \path[->] (0) edge (1) edge [bend right] (2) (2) edge (3) (3) edge [bend right] (1) (4) edge (5) edge (7) (6) edge (7) edge 
(5);
\end{tikzpicture}
\right) \strComp \seman{P} &
\end{align*}
\caption{\llccs{} interpretation as strategies}\label{fig:llccs:interp}
\end{figure}

%%%%%%%%%%%%%%%%%%%%%
\subsection{Adequacy}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Implementation of deterministic concurrent games}

\hfill\href{https://github.com/tobast/cam-strategies/}
{\includegraphics[height=2em]{github32.png}~\raisebox{0.5em}{Github
repository}}
\vspace{1em}

The first part of my internship mostly consisted --- apart from
understanding the bibliography and the underlying concepts --- of the
implementation of operations on \emph{deterministic} concurrent games, that
is, concurrent games as event structures without conflicts. The work had to
be done from scratch, as no one had implemented this before.

This implementation aims to provide
\begin{enumerate}[(i)]
\item a --- more or less --- convenient way to input games/strategies;
\item basic operations over those games and strategies: parallel
composition, pullback, interaction, composition, copycat, \ldots;
\item a clean display as a Dot graph.
\end{enumerate}

\subsection{Structures}

The implementation aims to stay as close as possible to the mathematical
model, while still providing quite efficient operations.

As we do not handle non-determinism, an event structure can be easily
represented as a DAG in memory. The actual representation that was chosen is
a set of nodes, each containing (along with a few other pieces of
information) a list of incoming and outgoing edges.

A \emph{game} is, in the literature, a simple ESP\@. However, to provide
interaction and composition operations, we have to somehow keep track of the
parallel compositions that were used to reach this game: if the user wants
to compose strategies on $A \strParallel B$ and $B \strParallel C$, we have
to remember that those games were indeed parallel compositions of the right
games, and not just a set where the events from, \eg, $A$ and $B$ are
mixed. \\
This information is kept in a tree, whose leaves are the base games that
were put in parallel, and whose nodes represent a parallel composition
operation.

Finally, a \emph{strategy} consists of a game and an ESP (the strategy
itself), plus a map from the nodes of the strategy to the nodes of the game.
This structure is really close to the mathematical definition of a strategy,
and yet incurs only a small loss in efficiency.

\subsection{Operations}

The usual operations on games and strategies, namely \emph{parallel
composition}, \emph{pullback}, \emph{interaction} and \emph{composition},
are implemented in a very modular way: each operation is implemented in a
functor, whose arguments are the other operations it makes use of, each
coming with its signature. Thus, one can simply
\lstocaml{open Operations.Canonical} to use the canonical implementation, or
define their own implementation, build it into an \lstocaml{Operations}
module (which has only a few lines of code) and then open it. This is
totally transparent to the user, who can use the same infix operators.

\subsubsection{Parallel composition}

While the usual construction (\cite{castellan2016concurrent}) involves
defining the events of $A \strParallel B$ as
${\set{0} \times A} \cup {\set{1} \times B}$, the parallel composition of
two strategies is here simply represented as the union of both event
structures, while altering the composition tree.
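To fix ideas, the structures described above could look like the following
OCaml sketch; the type and function names are illustrative and do not match
the actual interface of the repository.

\begin{lstlisting}
(* A minimal sketch of the structures described above; the names are
   illustrative, not the repository's actual interface. *)
type polarity = Plus | Minus

type event = {
  id : int;                    (* unique identifier of the node *)
  pol : polarity;              (* the event's polarity *)
  mutable parents : int list;  (* incoming edges: immediate causes *)
  mutable children : int list; (* outgoing edges: immediate effects *)
}

(* An ESP as a DAG: nodes indexed by their identifier. *)
type esp = (int, event) Hashtbl.t

(* The composition tree remembering how a game was built. *)
type game =
  | Base of esp            (* a base game *)
  | Par of game * game     (* a parallel composition *)

(* A strategy: an ESP together with its game and a map from the
   strategy's nodes to the game's nodes. *)
type strategy = {
  str : esp;
  game : game;
  map : (int, int) Hashtbl.t;
}

(* Parallel composition of games: no event is merged or re-tagged,
   only the composition tree is extended. *)
let par_games (g1 : game) (g2 : game) : game = Par (g1, g2)
\end{lstlisting}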
\subsubsection{Pullback}

Given two strategies on the same game, the pullback operation attempts to
extract a ``common part'' of those two strategies. Intuitively, the pullback
of two strategies is ``what happens'' if those two strategies play together.

The approach that was implemented (and that is used as
\lstocaml{Pullback.Canonical}) is a \emph{bottom-up} approach: iteratively,
the algorithm looks for an event whose dependencies in both strategies have
all been satisfied, adds it and marks the dependencies it satisfies as
such.\\
One could also imagine a \emph{top-down} approach, where the algorithm
starts working on the merged events of both strategies, then looks for
causal loops and removes every event involved.
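A condensed sketch of this bottom-up approach could look as follows, on a
simplified representation where each strategy maps every event (here, an
integer) to the list of its immediate causes; the names are again
illustrative, and the function merely returns one valid playing order.

\begin{lstlisting}
(* Bottom-up pullback sketch: events caught in a causal loop are
   never selected, and are thus dropped as in the definition. *)
module IntSet = Set.Make (Int)

let pullback (deps1 : (int * int list) list)
             (deps2 : (int * int list) list) : int list =
  (* Events of the pullback: those known to both strategies. *)
  let events =
    IntSet.inter
      (IntSet.of_list (List.map fst deps1))
      (IntSet.of_list (List.map fst deps2)) in
  (* An event depends on the union of its causes in both strategies. *)
  let deps e = List.assoc e deps1 @ List.assoc e deps2 in
  let rec emit played remaining acc =
    let ready =
      IntSet.filter
        (fun e -> List.for_all (fun d -> IntSet.mem d played) (deps e))
        remaining in
    match IntSet.choose_opt ready with
    | None -> List.rev acc (* only causal loops remain: drop them *)
    | Some e ->
      emit (IntSet.add e played) (IntSet.remove e remaining) (e :: acc)
  in
  emit IntSet.empty events []
\end{lstlisting}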
\subsubsection{Interaction}

Once the previous operations are implemented, \emph{interaction} is easily
defined as in the literature (\cite{castellan2016concurrent}) and is nearly
a one-liner.

\subsubsection{Composition}

Composition is also quite easy to implement, given the previous operations.
The only difficulty is that hiding the central part means computing the new
$\edgeArrow$ relation (that is, the transitive reduction of $\leq$), which
means computing the transitive closure of the interaction, hiding the
central part and then computing the transitive reduction of the DAG\@.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear lambda-calculus}

Concurrent games can be used as a model of lambda-calculus. To keep the
strategies finite, to avoid non-determinism and to keep the approach
somewhat simpler, one can use concurrent games as a model of \emph{linear}
lambda-calculus, that is, a variant of the simply-typed lambda-calculus
where each variable in the environment can and must be used exactly once.

\subsection{Definition}

The linear lambda calculus we use has the same syntax as the usual
simply-typed lambda calculus with type annotations, as presented in
figure~\ref{fig:llam:syntax}.

Only the typing rules of figure~\ref{fig:llam:typing} differ from the usual
rules and are worth noting. In the $(\textit{Ax})$ rule, the left part is
$x : A$ and not (as usual) $\Gamma, x:A$. This ensures that each defined
variable present in the environment will be used. The implicit condition
$\Gamma \cap \Delta = \emptyset$ in the $(\textit{App})$ rule ensures that
each defined variable will be used at most once.

The terms can then be interpreted as strategies through the $\seman{\cdot}$
operator defined as in figure~\ref{fig:llccs:interp}. The $\ominus$ stands
for a game whose only event is negative.

The interpretation operator maps a type to a game and a term to a strategy
playing on the game associated to its type, put in parallel with its
environment's dual. For instance, if $x:A \vdash t:B$, the strategy
$\seman{t}$ will play on $\seman{A}^\perp \parallel \seman{B}$. This
explains the definition of $\seman{\lambda x^A \cdot t}$: $\seman{t}$ plays
on $\seman{A}^\perp \parallel \seman{B}$, same as
$\seman{\lambda x^A \cdot t}$.

\subsection{Implementation}

The implementation, which was supposed to be fairly simple, turned out to be
not as straightforward as expected due to technical details: while, in
theory, the parallel composition is obviously associative and commutative
(up to isomorphism), and is thus used as such when dealing with environments
and typing rules, things get a bit harder in practice when one is supposed
to provide the exact strategy. For instance, the $(\textit{App})$ rule
states that the resulting environment is $\Gamma,\Delta$, while doing so in
the actual implementation (that is, simply considering
$\seman{\Gamma} \strParallel \seman{\Delta}$) turns out to be a nightmare:
it is better to keep the environment ordered by the variables' introduction
order, thus intertwining the variables of $\Gamma$ and $\Delta$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear \lccs}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliography{biblio}
\bibliographystyle{ieeetr}

\end{document}