\documentclass[11pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{indentfirst}
\usepackage{enumerate}
\usepackage{cite}
\usepackage{caption}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
% Custom packages
\usepackage{leftrule_theorems}
\usepackage{my_listings}
\usepackage{my_hyperref}
\usepackage{math}
\usepackage{concurgames}
\newcommand{\qtodo}[1]{\colorbox{orange}{\textcolor{blue}{#1}}}
\newcommand{\todo}[1]{\colorbox{orange}{\qtodo{\textbf{TODO:} #1}}}
\newcommand{\qnote}[1]{\colorbox{Cerulean}{\textcolor{Sepia}{[#1]}}}
\newcommand{\note}[1]{\qnote{\textbf{NOTE:} #1}}
\author{Théophile \textsc{Bastian}, supervised by Glynn \textsc{Winskel}
and Pierre \textsc{Clairambault} \\
\begin{small}Cambridge University\end{small}}
\title{Internship report\\Concurrent games as event structures}
\date{June-July 2016}
\begin{document}
\maketitle
\todo{abstract}
\tableofcontents
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\paragraph{Game semantics} are a kind of denotational semantics in which a
program's behavior is abstracted as a two-player game, in which Player plays
for the program and Opponent plays for the environment of the program (the
user, the operating system, \ldots). In this formalism, the execution of a
program is then represented as a succession of moves. For instance, the user
pressing a key on the keyboard would be a move of Opponent, to which Player
could react by triggering the corresponding action (\eg{} adding the
corresponding letter to a text field).

Game semantics emerged mostly with~\cite{hyland2000pcf}
and~\cite{abramsky2000pcf}, which independently established a fully-abstract
model for PCF using game semantics, where ``classic'' semantics had failed to
provide a fully-abstract, reasonable and satisfying model. But the field
mostly gained in notoriety with the development of techniques to capture
imperative programming language constructs, among which the handling of
references~\cite{abramsky1996linearity}, followed by higher-order
references~\cite{abramsky1998references}, making it possible to model
languages with side effects, or exception handling~\cite{laird2001exceptions}.
Since then, the field has been explored in depth, providing a wide range of
such constructions in the literature.

A success of game semantics is to provide \emph{compositional} and
\emph{syntax-free} semantics. Syntax-free, because representing a program as a
strategy on a game abstracts it completely from the original syntax of the
programming language, keeping only the behavior of a program reacting to its
execution environment, which is often desirable in semantics. Compositional,
because game semantics are usually defined by induction over the syntax, and
thus compose easily. For instance, it is worth noting that the application of
one term to another is represented as the \emph{composition} of the two
strategies.

\paragraph{Concurrency in game semantics.} In the continuity of the efforts
put forward to model imperative primitives in game semantics, it was natural
to focus at some point on modelling concurrency. The problem was tackled by
\fname{Laird}~\cite{laird2001game}, who introduced game semantics for a
\lcalc{} with a few additions, as well as a \emph{parallel execution} operator
and communication over channels. It is often considered, though, that
\fname{Ghica} and \fname{Murawski}~\cite{ghica2004angelic} laid the first
stone by defining game semantics for a fine-grained concurrent language
including parallel execution of ``threads'' and low-level semaphores.

However, both of these constructions are based on \emph{interleavings}. That
is, they model programs on \emph{tree-like games}, games in which the moves
that a player is allowed to play at a given point are represented as a tree
(\eg, in a state $A$, Player can play the move $x$ by following an edge of the
tree starting from $A$, thus reaching $B$ and allowing Opponent to play a
given set of moves --- the outgoing edges of $B$). Concurrency is then
represented as the \emph{interleaving} of all possible sequences of moves, in
order to reach a game tree in which every possible ``unordered'' (\ie, not
enclosed in any kind of synchronisation block, as with semaphores) combination
of moves is a valid path.

However, these approaches often introduce non-determinism in the strategies:
if two moves are available to a player, the model often states that she makes
a non-deterministic uniform choice. Yet, one could reasonably argue that a
program should behave consistently with its environment, which would mean that
the semantics of a program --- even a concurrent one --- should still be
deterministic. This idea was explored outside of the game semantics context,
for instance by~\cite{reynolds1978syntactic}, which establishes a
type-checking system restricting concurrent programs to deterministic ones.
Some recent work makes use of linear logic~\cite{caires2010session} for that
purpose as well. Yet, these techniques do not adapt well to interleaving-based
game semantics.

\paragraph{The purpose of this internship} was to take a first step towards
the unification of those two approaches. For that purpose, my objective was to
give a \emph{deterministic} game semantics to a linear lambda-calculus
enriched with parallel and sequential execution operators, as well as
synchronization on channels. In order to model this, I used the \emph{event
structures} formalism, described later on and introduced
in~\cite{rideau2011concurrent} by S.~\fname{Rideau} and G.~\fname{Winskel}.
Roughly, event structures represent a strategy as a \emph{partial order} on
the moves, stating that move $x$ can only be played after move $y$, which is
more flexible than tree-like game approaches. Although a full-abstraction
result could not be reached --- but is not so far away ---, we~\qnote{or I?}
have proved the \emph{adequacy} of the operational and denotational semantics,
and have obtained an implementation of the (denotational) game semantics, that
is, code that translates a term of the language into its corresponding
strategy.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A linear \lcalc{} with concurrency primitives: \llccs}
The language on which my internship was focused was meant to be simple, easy
to parse and easy to work with, both in theory and in the implementation. It
should of course include concurrency primitives. For these reasons, we chose
to consider a variant of CCS~\cite{milner1980ccs} --- a simple standard
language including parallel and sequential execution primitives, as well as
synchronization of processes through \emph{channels} --- lifted up to the
higher order through a \lcalc. The language was then restricted to a
\emph{linear} one --- that is, each identifier declared must be referred to
exactly once --- partly to keep the model simple, partly to meet the
determinism requirements.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{An affine variant of CCS}
The variant of CCS we chose to use has two base types:
\emph{processes}~($\proc$) and \emph{channels}~($\chan$). It has two base
processes: $0$ (failure) and $1$ (success), although a process can be
considered ``failed'' without reducing to $0$.
Although the \lcalc{} into which we will plug it is linear, it appeared
unavoidable to use an \emph{affine} CCS --- that is, each identifier may be
used \emph{at most} once instead of \emph{exactly} once --- to ease the
inductions over CCS terms.
\begin{figure}[h]
\begin{minipage}[t]{0.60\textwidth}
\begin{center}Terms\end{center}\vspace{-1em}
\begin{align*}
t,u,\ldots ::=~&1 & \text{(success)}\\
\vert~&0 & \text{(error)}\\
\vert~&t \parallel u & \text{(parallel)}\\
\vert~&t \cdot u & \text{(sequential)}\\
\vert~&(\nu a) t & \text{(new channel)}
\end{align*}
\end{minipage} \hfill \begin{minipage}[t]{0.35\textwidth}
\begin{center}Types\end{center}\vspace{-1em}
\begin{align*}
A,B,\ldots ::=~&\proc & \text{(process)} \\
\vert~&\chan & \text{(channel)}
\end{align*}
\end{minipage}
\caption{CCS terms and types}\label{fig:lccs:def}
\end{figure}
The syntax is pretty straightforward to understand: $0$ and $1$ are base
processes; $\parallel$ executes its two operands in parallel; $\cdot$ executes
its two operands sequentially (or synchronizes on a channel if its left-hand
operand is a channel); $(\nu a)$ creates a new channel $a$ on which two
processes can synchronize. Here, ``synchronization'' simply means that a call
to the channel blocks until its other end has been called as well.

The language is simply typed as in figure~\ref{fig:lccs:typing}, where $\Phi$
is an environment composed \emph{only} of channels, reflecting the affine
nature of the language. Note that binary operators split their environment
between their two operands, ensuring that each identifier is used at most
once.
\begin{figure}[h]
\begin{align*}
\frac{~}{\Phi \vdash 0:\proc} & (\textit{Ax}_0) &
\frac{~}{\Phi \vdash 1:\proc} & (\textit{Ax}_1) &
\frac{~}{\Phi, t:A \vdash t:A} & (\textit{Ax}) &
\frac{\Gamma, a:\chan, \bar{a}:\chan \vdash P : \proc}
{\Gamma \vdash (\nu a) P : \proc}
& (\nu)
\end{align*}\vspace{-1.5em}\begin{align*}
\frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
{\Gamma,\Delta \vdash P \parallel Q : \proc}
& (\parallel) &
\frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
{\Gamma,\Delta \vdash P \cdot Q : \proc}
& (\cdot_\proc) &
\frac{\Gamma \vdash P : \proc}
{\Gamma,a:\chan \vdash a \cdot P: \proc}
& (\cdot_\chan)
\end{align*} \vspace{-1.5em}
\caption{CCS typing rules}\label{fig:lccs:typing}
\end{figure}
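
For instance, the process $(\nu a)(a \cdot 1 \parallel \bar{a} \cdot 1)$ is
well-typed: $(\textit{Ax}_1)$ gives $\vdash 1 : \proc$, so $(\cdot_\chan)$
gives $a:\chan \vdash a \cdot 1 : \proc$ and $\bar{a}:\chan \vdash \bar{a}
\cdot 1 : \proc$; the rule $(\parallel)$ then merges the two environments into
$a:\chan, \bar{a}:\chan \vdash a \cdot 1 \parallel \bar{a} \cdot 1 : \proc$,
and $(\nu)$ finally yields $\vdash (\nu a)(a \cdot 1 \parallel \bar{a} \cdot
1) : \proc$.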
We also equip this language with an operational semantics, in the form of the
labeled transition system (LTS) described in figure~\ref{fig:lccs:opsem},
where $a$ denotes a channel and $x$ denotes any label.
\begin{figure}[h]
\begin{align*}
\frac{~}{a \cdot P \redarrow{a} P} & &
\frac{~}{1 \parallel P \redarrow{\tau} P} & &
\frac{~}{1 \cdot P \redarrow{\tau} P} & &
\frac{~}{(\nu a) 1 \redarrow{\tau} 1} & &
\frac{P \redarrow{a} P'\quad Q \redarrow{\bar{a}} Q'}
{P \parallel Q \redarrow{\tau} P' \parallel Q'}
\end{align*}\begin{align*}
\frac{P \redarrow{x} P'}
{P \parallel Q \redarrow{x} P' \parallel Q} & &
\frac{Q \redarrow{x} Q'}
{P \parallel Q \redarrow{x} P \parallel Q'} & &
\frac{P \redarrow{x} P'}
{P \cdot Q \redarrow{x} P' \cdot Q} & &
\frac{P \redarrow{x} P'}
{(\nu a)P \redarrow{x} (\nu a)P'} & (a \neq x)
\end{align*}
\caption{CCS operational semantics}\label{fig:lccs:opsem}
\end{figure}
We consider that a term $P$ \emph{converges} whenever $P \redarrow{\tau}^\ast
1$, and we write $P \Downarrow$.
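
For instance, the process typed above converges:
\[
	(\nu a)(a \cdot 1 \parallel \bar{a} \cdot 1)
	\redarrow{\tau} (\nu a)(1 \parallel 1)
	\redarrow{\tau} (\nu a) 1
	\redarrow{\tau} 1
\]
where the first step synchronizes $a$ and $\bar{a}$, the second uses $1
\parallel P \redarrow{\tau} P$, and the last one discards the now-useless
channel.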
\subsection{Lifting to the higher order: linear \lcalc}
In order to reach the studied language, \llccs, we have to lift \lccs{} up to
a \lcalc. To do so, we add to the language the constructions of
figure~\ref{fig:llam:syntax}, which are basically the usual \lcalc{}
constructions slightly transformed to be linear (which is mostly reflected in
the typing rules). In particular, there is no base type $\alpha$: the only
base types are $\proc$ and $\chan$.
\begin{figure}[h]
\begin{minipage}[t]{0.55\textwidth}
\begin{center}Terms\end{center}\vspace{-1em}
\begin{align*}
t,u,\ldots ::=~&x \in \mathbb{V} & \text{(variable)}\\
\vert~&t~u & \text{(application)}\\
\vert~&\lambda x^A \cdot t & \text{(abstraction)}\\
\vert~&t \tens u & \text{(tensor)} \\
\vert~&\text{let }x,y=z\text{~in }t & \text{(tensor elimination)}
\end{align*}
\end{minipage} \hfill \begin{minipage}[t]{0.40\textwidth}
\begin{center}Types\end{center}\vspace{-1em}
\begin{align*}
A,B,\ldots ::=~&A \linarrow B & \text{(linear arrow)}\\
\vert~&A \Tens B & \text{(tensor)}
\end{align*}
\end{minipage}
\caption{Linear \lcalc{} terms and types}\label{fig:llam:syntax}
\end{figure}
To enforce the linearity, the only typing rules of the usual \lcalc{} that
have to be changed are those of figure~\ref{fig:llam:typing}. The usual rules
for $\lambda$-abstraction and $\otimes$-elimination apply.
\begin{figure}[h]
\begin{align*}
\frac{~}{x : A \vdash x : A} & (\textit{Ax}) &
\frac{\Gamma \vdash t : A \linarrow B \quad \Delta \vdash u : A}
{\Gamma,\Delta \vdash t~u : B} & (\textit{App}) &
\frac{\Gamma \vdash t : A \quad \Delta \vdash u : B}
{\Gamma, \Delta \vdash t \tens u : A \Tens B} & (\tens_I)
\end{align*}
\caption{Linear \lcalc{} typing rules (eluding the usual
ones)}\label{fig:llam:typing}
\end{figure}
The linearity is here guaranteed: in the (\textit{Ax}) rule, the environment
must be $x:A$ instead of the usual $\Gamma, x:A$, ensuring that each variable
is used \emph{at least once}; while the environment split in the binary
operators' rules ensures that each variable is used \emph{at most once}
(implicitly, $\Gamma \cap \Delta = \emptyset$).
To lift the operational semantics to \llccs, we only need to add one rule:
\[
\frac{P \longrightarrow_\beta P'}{P \redarrow{\tau} P'}
\]
%%%%%%%%%%%%%%%%%%%%%
\subsection{Examples}
\todo{Examples}
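
For instance, the term $(\lambda x^\proc \cdot x \cdot 1)~1$ runs as
\[
	(\lambda x^\proc \cdot x \cdot 1)~1
	\redarrow{\tau} 1 \cdot 1
	\redarrow{\tau} 1
\]
where the first ($\beta$-reduction) step is silent. Similarly, $(\nu
a)\left(((\lambda x^\chan \cdot x \cdot 1)~a) \parallel \bar{a} \cdot 1\right)$
first $\beta$-reduces to $(\nu a)(a \cdot 1 \parallel \bar{a} \cdot 1)$, the
synchronization example of the previous section, and thus converges.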
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A games model}
Our goal is now to give a games model for the above language. For that purpose,
we will use \emph{event structures}, providing an alternative formalism to
the often-used tree-like games.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The event structures framework}
The traditional approach to concurrent games is to represent them as
\emph{tree-like games}. If the considered game consists of three moves, namely
$A$, $B$ and $C$, where $A$ can be played by Opponent and the others by Player
\emph{after} Opponent has played $A$, the states of the game are $\epsilon$,
$A$, $A \cdot B$ and $A \cdot C$, corresponding to the game tree
\[
\begin{tikzpicture}
\node (1) [ellipse] {A} ;
\node (2) [below left of=1, ellipse] {B};
\node (3) [below right of=1, ellipse] {C};
\path [->]
(1) edge (2)
edge (3);
\end{tikzpicture}
\]
This can of course be used to describe much larger games, and is often
convenient for concurrent reasoning, since the causal histories appear
clearly: the possible states of the game can be read easily by concatenating
the events found along a path from the root of the tree.
But it also has the major drawback of growing exponentially in size: let us
consider a game in which Opponent must play $A$ and $B$ in no particular order
before Player can play $C$. The corresponding tree-like game would be
\[
\begin{tikzpicture}
\node (11) {$A_1$};
\node (21) [below of=11] {$B_1$};
\node (31) [below of=21] {$C_1$};
\node (22) [right of=11] {$B_2$};
\node (12) [below of=22] {$A_2$};
\node (32) [below of=12] {$C_2$};
\path [->]
(11) edge (21)
(21) edge (31)
(22) edge (12)
(12) edge (32);
\end{tikzpicture}
\]
This problem motivated the introduction of \emph{event structures} as a
formalism to describe such games~\cite{rideau2011concurrent}. Informally, an
event structure is a partial order $\leq$ on \emph{events} (here, the game's
moves), along with a \emph{consistency} relation.

The purpose of the consistency relation is to describe non-determinism, in
which we are not interested here, since we seek a deterministic model: in all
the following constructions, I will omit the consistency set. The original
constructions including it can be found for instance
in~\cite{castellan2016concurrent,winskel1986event}.
The partial order $e_1 \leq e_2$ means that $e_1$ must have been played before
$e_2$ can be played. For instance, the Hasse diagram of the previous game would
look like
\[
\begin{tikzpicture}
\node (1) {A};
\node (2) [right of=1] {B};
\node (3) [below left of=1, below right of=2] {C};
\path[->]
(1) edge (3)
(2) edge (3);
\end{tikzpicture}
\]
%%%%%
\subsubsection{Event structures}
\begin{definition}[event structure]
An \emph{event structure}~\cite{winskel1986event} is a poset $(E, \leq_E)$,
where $E$ is a set of \emph{events} and $\leq_E$ is a partial order on $E$.
The partial order $\leq_E$ naturally induces a binary relation $\edgeArrow$
over $E$ that is defined as the transitive reduction of $\leq_E$.
\end{definition}
In this context, the right intuition of event structures is that of a set of
events that can occur --- the players' moves --- along with a partial order
stating that a given move cannot occur before another one.
Event structures are often represented as a directed acyclic graph (DAG)
where the vertices are the elements of $E$ and the edges are the transitive
reduction of $\leq_E$.
\begin{definition}[event structure with polarities]
An \emph{event structure with polarities} (\textit{ESP}) is an event
structure $(E, \leq_E, \rho)$, where $\rho : E \to \set{+,-}$ is a
function associating a \emph{polarity} to each event.
\end{definition}
In order to model games, this is used to represent whether a move is to be
played by Player or Opponent. To represent polarities, we will often use colors
instead of $+$ and $-$ signs: a red-circled event will have a negative
polarity, \ie{} will be played by Opponent, while a green-circled one will have
a positive polarity.
The ESP of the previous example would then be
\[
\begin{tikzpicture}
\node (1) [draw=red,ellipse] {A};
\node (3) [draw=green,ellipse,right of=1] {C};
\node (2) [draw=red,ellipse,right of=3] {B};
\path[->]
(1) edge (3)
(2) edge (3);
\end{tikzpicture}
\]
\begin{definition}[configuration]
A \emph{configuration} of an ESP $A$ is a finite subset $X \subseteq A$
that is \emph{down-closed}, \ie{}
\vspace{-0.5em}
\[ {\forall x \in X}, {\forall e \in A}, {e \leq_A x} \implies {e \in X}.\]
$\config(A)$ is the set of configurations of $A$.
\end{definition}
A configuration can thus be seen as a valid state of the game. $\config(A)$
plays a major role in definitions and proofs on games and strategies.
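
For instance, the game above (where $A$ and $B$ must be played before $C$) has
exactly five configurations: $\emptyset$, $\set{A}$, $\set{B}$, $\set{A,B}$
and $\set{A,B,C}$, \ie{} its reachable states.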
\begin{notation}
For $x,y \in \config(A)$, $x \forkover{e} y$ states that $y = x \sqcup
\set{e}$ (and that both are valid configurations). It is also possible to
write $x \forkover{e}$, stating that $x \sqcup \set{e} \in \config(A)$, or
$x \fork y$.
\end{notation}
%%%%%
\subsubsection{Concurrent games}
\begin{definition}[game]
A \emph{game} $A$ is an event structure with polarities. \\
The dual game $A^\perp$ is the game $A$ where all the polarities in
$\rho$ have been reversed.
\end{definition}
For instance, one could imagine a game modeling the user interface of a coffee
machine: Player is the coffee machine, while Opponent is a user coming to buy a
drink.
\begin{example}[Coffee machine]
Here, the game has only events, but no edges: nothing in the rules of the
game constrains the program to behave in a certain way; only its
\emph{strategy} will do that.
\smallskip
\includedot[scale=0.9]{coffeemachine.game}
\captionof{figure}{Coffee machine game}
\medskip
The user can insert a coin, ask for a coffee or ask for a tea. The coffee
machine can deliver a coffee or deliver a tea.
\end{example}
\begin{definition}[pre-strategy]
A \emph{pre-strategy} on the game $A$, $\sigma: A$, is an ESP such that
\begin{enumerate}[(i)]
\item $\sigma \subseteq A$;
\item $\config(\sigma) \subseteq \config(A)$;
% \item \textit{(local injectivity)} $\forall s,s' \in \config(S),
% \sigma(s) = \sigma(s') \implies s = s'$;
\item $\forall s \in \sigma, \rho_A(s) = \rho_\sigma(s)$
\item \qnote{Local injectivity: useless without conflicts?}
\end{enumerate}
\end{definition}
\begin{example}[Coffee machine, cont.]
A possible \emph{pre-strategy} for our coffee machine example would be
\smallskip
\begin{centering}
\includedot{coffeemachine.strat}
\captionof{figure}{Coffee machine strategy}
\end{centering}
This pre-strategy makes sense: the coffee machine software waits for the
user to both put a coin and press ``coffee'' before delivering a coffee,
and the same goes for tea. But what happens if the user inserts a coin and
presses \emph{both} buttons at the same time? Here, the coffee machine can
dispense both drinks. This behavior is surely unwanted, but cannot easily be
fixed using this representation of the coffee machine --- this would
have been the role of the \emph{consistency set} we briefly mentioned.
\end{example}
But as it is defined, a pre-strategy does not exactly capture what we expect
of a \emph{strategy}: it is too expressive. For instance, a pre-strategy on
the coffee machine game would be allowed to contain the (absurd) relation
$\text{coffee} \leq \text{getTea}$, stating that the user cannot press the
``tea'' button before the machine has delivered a coffee. This is unrealistic:
the coffee machine has no means to enforce this. We thus have to restrict
pre-strategies to \emph{strategies}:
\begin{definition}[strategy]
A \emph{strategy} is a pre-strategy $\sigma : A$ that
``behaves well'', \ie{} that is
\begin{enumerate}[(i)]
\item\label{def:receptive}
\textit{receptive}: for all $x \in \config(A)$ \st{}
$x \forkover{e^-}$, $e \in \sigma$; \qnote{equivalent to
$\rho^{-1}(\ominus) \subseteq \sigma$ in a deterministic context?}
\item\label{def:courteous}
\textit{courteous}: $\forall x \edgeArrow_\sigma x' \in \sigma$,
$(\rho(x),\rho(x')) \neq (-,+) \implies
x \edgeArrow_A x'$.
\end{enumerate}
\end{definition}
(\ref{def:receptive}) captures the idea that we should not prevent Opponent
from playing one of her moves. Indeed, not including an event in a strategy
means that this event \emph{will not} be played. It is unreasonable to
consider that a strategy could forbid Opponent to play a given move.

(\ref{def:courteous}) states that unless a dependency relation is imposed by
the game's rules, one can only make one of one's own moves depend on an
Opponent move, \ie{} every direct arrow in the partial order that is not
inherited from the game should be ${\ominus \edgeArrow \oplus}$. Clearly, it
is unreasonable to consider an arrow ${\ostar \edgeArrow \ominus}$, which
would mean forcing Opponent to wait for a move (either from Player or
Opponent) before playing her move; but ${\oplus \edgeArrow \oplus}$ is also
unreasonable, since we are working in a concurrent context. Intuitively, when
Player plays $x$ then $y$, it is undefined whether Opponent will receive $x$
then $y$ or $y$ then $x$.
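
For instance, courtesy is exactly what rules out the absurd pre-strategy
mentioned above: the arrow $\text{coffee} \edgeArrow \text{getTea}$ is of
shape ${\oplus \edgeArrow \ominus}$ and is not inherited from the game, hence
such a pre-strategy is not courteous.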
%%%%%
\subsubsection{Operations on games and strategies}
In order to manipulate strategies and define them by induction over the syntax,
the following operations will be extensively used. It may also be worth noting
that in the original formalism~\cite{castellan2016concurrent}, games,
strategies and maps between them form a bicategory in which these operations
play special roles.
In this whole section, unless stated otherwise, $E$ and $F$ denote ESPs, $A$,
$B$ and $C$ denote games, and $\sigma: A$ and $\tau: B$ denote strategies.
\begin{definition}[Parallel composition]
The \emph{parallel composition} $E \parallel F$ of two ESPs is an ESP
whose events are $\left(\set{0} \times E\right) \sqcup \left(\set{1} \times
F\right)$ (the disjoint tagged union of the events of $E$ and $F$), and
whose partial order is $\leq_E$ on $E$ and $\leq_F$ on $F$, with no
relation between elements of $E$ and $F$.
One can then naturally expand this definition to games (by preserving
polarities) and to strategies.
\end{definition}
Given two strategies on dual games $A$ and $A^\perp$, it is interesting to
compute their \emph{interaction}, that is, ``what will happen if one strategy
plays against the other''.
\note{Are the following names clear enough?}
\begin{definition}[Interaction]
Given two strategies $\sigma : A$ and $\tau : A^\perp$, their
\emph{interaction} $\sigma \wedge \tau$ is the ESP
$\sigma \cup \tau \subseteq A$ from which causal loops have been removed.
More precisely, $\sigma \cup \tau$ is a set equipped with a \emph{preorder}
(the transitive closure of $\leq_\sigma \cup \leq_\tau$) that may not respect
antisymmetry, that is, may have causal loops. $\sigma \wedge \tau$ is then
obtained by removing from $\sigma \cup \tau$ all the elements contained in
such loops.
\end{definition}
\textit{Note: this can be interpreted as a pullback in the category mentioned
above.\\
This construction, even though it is equivalent to the construction
of~\cite{castellan2016concurrent} when considering deterministic strategies, is
no longer valid when adding a consistency set.}
\begin{definition}[Compositional interaction]
Given two strategies $\sigma : A^\perp \parallel B$ and $\tau : B^\perp
\parallel C$, their \emph{compositional interaction} $\tau \strInteract
\sigma$ is defined as $(\sigma \parallel \id_C) \wedge (\id_A \parallel
\tau)$. \qtodo{Tell me more?}
\end{definition}
\begin{definition}[Strategies composition]
Given two strategies $\sigma : A^\perp \parallel B$ and $\tau : B^\perp
\parallel C$, their \emph{composition} $\tau \strComp \sigma$ is obtained
from the compositional interaction $\tau \strInteract \sigma$ by \emph{hiding}
the central part, \ie{} by keeping only the events lying in $A^\perp \parallel
C$ and the order induced on them.
\end{definition}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Interpretation of \llccs}
%%%%%%%%%%%%%%%%%%%%%
\subsection{Adequacy}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Implementation of deterministic concurrent games}
\hfill\href{https://github.com/tobast/cam-strategies/}
{\includegraphics[height=2em]{github32.png}~\raisebox{0.5em}{Github
repository}}
\vspace{1em}
The first part of my internship mostly consisted --- apart from understanding
the bibliography and the underlying concepts --- in the implementation of
operations on \emph{deterministic} concurrent games, that is, concurrent games
as event structures without conflicts. The work had to be done from scratch,
as no one had implemented this before.
This implementation aims to provide
\begin{enumerate}[(i)]
\item a --- more or less --- convenient way to input games/strategies;
\item basic operations over those games and strategies: parallel
composition, pullback, interaction, composition, copycat, \ldots;
\item a clean display as a Dot graph.
\end{enumerate}
\subsection{Structures}
The implementation aims to stay as close as possible to the mathematical model,
while still providing quite efficient operations.
As we do not handle non-determinism, an event structure can easily be
represented as a DAG in memory. The chosen representation is a set of nodes,
each containing (along with a few other pieces of information) a list of
incoming and outgoing edges.
A \emph{game} is, in the literature, a simple ESP\@. However, to provide
interaction and composition operations, we have to somehow keep track of the
parallel compositions that were used to reach this game: if the user wants to
compose strategies on $A \strParallel B$ and $B \strParallel C$, we have to
remember that those games were indeed parallel compositions of the right games,
and not just a set where the events from, \eg, $A$ and $B$ are mixed. \\
This information is kept in a tree, whose leaves are the base games that were
put in parallel, and whose nodes represent a parallel composition operation.
Finally, a \emph{strategy} consists of a game and an ESP (the strategy
itself), plus a map from the nodes of the strategy to the nodes of the game.
This structure is really close to the mathematical definition of a strategy,
and yet costs only a little efficiency.
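
As an illustration, these structures could be sketched as below (a minimal
sketch with hypothetical names; the actual types of the repository carry more
information).

\begin{lstlisting}[language=Caml]
type polarity = Pos | Neg

(* An event of the DAG: a node holds its polarity plus its incoming
   and outgoing edges (the transitive reduction of the order). *)
type node = {
  id : int;
  pol : polarity;
  mutable preds : node list;
  mutable succs : node list;
}

type esp = { events : node list }

(* A game remembers the parallel compositions used to build it. *)
type game = Base of esp | Parallel of game * game

(* A strategy: an ESP together with a map to the game's events. *)
type strategy = {
  st_esp : esp;
  st_game : game;
  st_map : (node * node) list;  (* strategy node -> game node *)
}
\end{lstlisting}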
\subsection{Operations}
The usual operations on games and strategies, namely \emph{parallel
composition}, \emph{pullback}, \emph{interaction} and \emph{composition} are
implemented in a very modular way: each operation is implemented in a functor,
whose arguments are the other operations it makes use of, each coming with its
signature. Thus, one can simply \lstocaml{open Operations.Canonical} to use the
canonical implementation, or define one's own implementation, build it into an
\lstocaml{Operations} module (which takes only a few lines of code) and then
open it. This is totally transparent to the user, who can use the same infix
operators.
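
To give the flavour of this architecture, here is a minimal hypothetical
sketch; the modules of the repository are of course richer.

\begin{lstlisting}[language=Caml]
(* Each operation is a functor over the operations it relies on;
   names and signatures here are hypothetical. *)
module type PARALLEL = sig
  type strategy
  val parallel : strategy -> strategy -> strategy
end

module type PULLBACK = sig
  type strategy
  val pullback : strategy -> strategy -> strategy
end

(* Interaction is assembled from parallel composition and pullback;
   the copycat strategies it should insert are elided here. *)
module MakeInteraction
    (Par : PARALLEL)
    (Pb : PULLBACK with type strategy = Par.strategy) =
struct
  type strategy = Par.strategy
  let interaction sigma tau = Pb.pullback sigma tau
end
\end{lstlisting}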
\subsubsection{Parallel composition}
While the usual construction (\cite{castellan2016concurrent}) involves defining
the events of $A \strParallel B$ as ${\set{0} \times A} \cup {\set{1}
\times B}$, the parallel composition of two strategies is here simply
represented as the union of both event structures, while altering the
composition tree.
\subsubsection{Pullback}
Given two strategies on the same game, the pullback operation attempts to
extract a ``common part'' of those two strategies. Intuitively, the pullback of
two strategies is ``what happens'' if those two strategies play together.
The approach that was implemented (and that is used as
\lstocaml{Pullback.Canonical}) is a \emph{bottom-up} approach: iteratively, the
algorithm looks for an event that has no dependencies in both strategies, adds
it and removes the satisfied dependencies.\\
One could also imagine a \emph{top-down} approach, where the algorithm starts
from the merged events of both strategies, then looks for causal loops and
removes every event involved in one.
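
On a toy representation where a strategy is an association list from each
event to its direct dependencies, the bottom-up approach could be sketched as
follows (hypothetical code, not the repository's).

\begin{lstlisting}[language=Caml]
(* Toy sketch of the bottom-up pullback: a strategy is an association
   list from each event to its direct dependencies. *)
let pullback (s : (int * int list) list) (t : (int * int list) list) =
  let deps x st = try List.assoc x st with Not_found -> [] in
  let events = List.sort_uniq compare (List.map fst (s @ t)) in
  let rec grow played =
    (* an event is enabled once its dependencies in BOTH strategies
       have been played *)
    let enabled e =
      not (List.mem e played)
      && List.for_all (fun d -> List.mem d played)
           (deps e s @ deps e t)
    in
    match List.filter enabled events with
    | [] -> List.rev played  (* fixpoint: loops are left out *)
    | e :: _ -> grow (e :: played)
  in
  grow []
\end{lstlisting}

On two strategies imposing respectively $1 \leq 2$ and $2 \leq 1$, no event is
ever enabled and the result is empty, mirroring the removal of causal loops.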
\subsubsection{Interaction}
Once the previous operations are implemented, \emph{interaction} is easily
defined as in the literature~\cite{castellan2016concurrent} and is nearly a
one-liner.
\subsubsection{Composition}
Composition is also quite easy to implement, given the previous operations.
The only difficulty is that hiding the central part means computing the new
$\edgeArrow$ relation (that is, the transitive reduction of $\leq$), which
means computing the transitive closure of the interaction, hiding the central
part and then computing the transitive reduction of the resulting DAG\@.
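
On the same toy representation as above, the hiding step at the heart of
composition could be sketched as follows (again hypothetical code; the final
transitive reduction is elided).

\begin{lstlisting}[language=Caml]
(* Toy sketch of hiding: saturate the dependencies (transitive
   closure), then drop the hidden events. *)
let rec ancestors strat e =
  let direct = try List.assoc e strat with Not_found -> [] in
  List.sort_uniq compare
    (direct @ List.concat_map (ancestors strat) direct)

let hide hidden strat =
  List.filter_map
    (fun (e, _) ->
       if List.mem e hidden then None
       else
         Some (e, List.filter (fun d -> not (List.mem d hidden))
                    (ancestors strat e)))
    strat
\end{lstlisting}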
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear lambda-calculus}
Concurrent games can be used as a model of the lambda-calculus. To keep the
strategies finite, to avoid non-determinism and to have a somewhat easier
approach, one can use concurrent games as a model of \emph{linear}
lambda-calculus, that is, a variant of the simply-typed lambda-calculus where
each variable in the environment can and must be used exactly once.
\subsection{Definition}
The linear lambda calculus we use has the same syntax as the usual
simply-typed lambda calculus with type annotations and tensor product, as
presented in figure~\ref{fig:llam:syntax}.

\medskip

Only the typing rules of figure~\ref{fig:llam:typing} differ from the usual
rules and are worth noting.

Note that in~(\textit{Ax}), the left part is $x : A$ and not (as usual)
$\Gamma, x:A$. This ensures that each variable present in the environment
will be used. The implicit condition $\Gamma \cap \Delta = \emptyset$
in~(\textit{App}) ensures that each variable will be used at most once.
The terms can then be interpreted as strategies through the $\seman{\cdot}$
operator defined as in figure~\ref{fig:llam:interp}. The $\ominus$ stands for a
game whose only event is negative. The interpretation operator maps a type to a
game and a term to a strategy playing on the game associated to its type, put
in parallel with its environment's dual. For instance, if $x:A \vdash t:B$, the
strategy $\seman{t}$ will play on $\seman{A}^\perp \parallel \seman{B}$.
This explains the definition of $\seman{\lambda x^A \cdot t}$: $\seman{t}$
plays on $\seman{A}^\perp \parallel \seman{B}$, the same game as
$\seman{\lambda x^A \cdot t}$.
\begin{figure}
\begin{minipage}[t]{0.45\textwidth} \begin{align*}
\seman{x^A} &\eqdef \cc_{\seman{A}} \\
\seman{t^{A \linarrow B}~u^{A}} &\eqdef
\cc_{A \linarrow B} \strComp \left( \seman{t} \parallel \seman{u}
\right) \\
\seman{\lambda x^A \cdot t} &\eqdef \seman{t} \\
\seman{t \tens u} &\eqdef \seman{t} \parallel \seman{u}
\end{align*} \end{minipage} \hfill
\begin{minipage}[t]{0.45\textwidth} \begin{align*}
\seman{\alpha} &\eqdef \ominus \\
\seman{A \linarrow B} &\eqdef \seman{A}^\perp \parallel \seman{B} \\
\seman{A \Tens B} &\eqdef \seman{A} \parallel \seman{B}
\end{align*}\end{minipage}
\caption{Interpretation of linear lambda calculus}\label{fig:llam:interp}
\end{figure}
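
For instance, the identity $\lambda x^\alpha \cdot x$ is interpreted as
$\cc_{\seman{\alpha}}$, playing on $\seman{\alpha}^\perp \parallel
\seman{\alpha} = {\oplus} \parallel {\ominus}$: as expected, the copycat
strategy waits for Opponent's move on the right-hand component before playing
its copy on the left-hand one.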
\subsection{Implementation}
The implementation, which was supposed to be fairly simple, turned out to be
less straightforward than expected due to technical details: while, in theory,
parallel composition is obviously associative and commutative (up to
isomorphism), and is thus used as such when dealing with environments and
typing rules, things get a bit harder in practice, when one is supposed to
provide the exact strategy.
For instance, the rule~(\textit{App}) above states that the resulting
environment is $\Gamma,\Delta$, while doing so in the actual implementation
(that is, simply considering $\seman{\Gamma} \strParallel \seman{\Delta}$)
turns out to be a nightmare: it is better to keep the environment ordered by
the variables' introduction order, thus intertwining $\Gamma$ and $\Delta$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear \lccs}
\begin{figure}[h]
\begin{align*}
\seman{P \parallel Q} &\eqdef \left(
\begin{tikzpicture}[baseline, scale=0.8]
\node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
\node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
\node (2) at (2.5,0.65) [draw=green,ellipse] {call $Q$};
\node (3) at (2.5,-0.65) [draw=red,ellipse] {done $Q$};
\node (0) at (5,0.65) [draw=red,ellipse] {call};
\node (1) at (5,-0.65) [draw=green,ellipse] {done};
\path[->]
(0) edge (1)
edge [bend right] (2)
edge [bend right] (4)
(2) edge (3)
(4) edge (5)
(3) edge [bend right] (1)
(5) edge [bend right] (1);
\end{tikzpicture}
\right) \strComp \left(\seman{P} \parallel \seman{Q}\right) &
\seman{\proc} = \seman{\chan} &\eqdef \begin{tikzpicture}[baseline]
\node (1) at (0,0.5) [draw=red,ellipse] {call};
\node (2) at (0,-0.5) [draw=green,ellipse] {done};
\draw [->] (1) -- (2);
\end{tikzpicture}
\\ %%%%%%%%%%%%%%%%%%%%%%%%%
\seman{P \cdot Q} &\eqdef \left(
\begin{tikzpicture}[baseline,scale=0.8]
\node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
\node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
\node (2) at (2.5,0.65) [draw=green,ellipse] {call $Q$};
\node (3) at (2.5,-0.65) [draw=red,ellipse] {done $Q$};
\node (0) at (5,0.65) [draw=red,ellipse] {call};
\node (1) at (5,-0.65) [draw=green,ellipse] {done};
\path[->]
(0) edge (1)
edge [bend right] (4)
(2) edge (3)
(4) edge (5)
(3) edge [bend right] (1)
(5) edge (2);
\end{tikzpicture}
\right) \strComp \left(\seman{P} \parallel \seman{Q}\right) &
\seman{1} &\eqdef \begin{tikzpicture}[baseline]
\node (1) at (0,0.5) [draw=red,ellipse] {call};
\node (2) at (0,-0.5) [draw=green,ellipse] {done};
\draw [->] (1) -- (2);
\end{tikzpicture}
\\ %%%%%%%%%%%%%%%%%%%%%%%%%
\seman{(a : \chan) \cdot P} &\eqdef \left(
\begin{tikzpicture}[baseline,scale=0.8]
\node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
\node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
\node (2) at (2.5,0.65) [draw=green,ellipse] {call $a$};
\node (3) at (2.5,-0.65) [draw=red,ellipse] {done $a$};
\node (0) at (5,0.65) [draw=red,ellipse] {call};
\node (1) at (5,-0.65) [draw=green,ellipse] {done};
\path[->]
(0) edge (1)
edge [bend right] (2)
(2) edge (3)
(4) edge (5)
(3) edge (4)
(5) edge [bend right] (1);
\end{tikzpicture}
\right) \strComp \left(\seman{P} \parallel \seman{a}\right) &
\seman{0} &\eqdef \begin{tikzpicture}[baseline]
\node (1) at (0,0.2) [draw=red,ellipse] {call};
\end{tikzpicture}
\\ %%%%%%%%%%%%%%%%%%%%%%%%%
\seman{(\nu a) P} &\eqdef \left(
\begin{tikzpicture}[baseline,scale=0.8]
\node (6) at (0,0.65) [draw=green,ellipse] {call $a$};
\node (7) at (0,-0.65) [draw=red,ellipse] {done $a$};
\node (4) at (2.5,0.65) [draw=green,ellipse] {call $\bar{a}$};
\node (5) at (2.5,-0.65) [draw=red,ellipse] {done $\bar{a}$};
\node (2) at (5,0.65) [draw=green,ellipse] {call $P$};
\node (3) at (5,-0.65) [draw=red,ellipse] {done $P$};
\node (0) at (7.5,0.65) [draw=red,ellipse] {call};
\node (1) at (7.5,-0.65) [draw=green,ellipse] {done};
\path[->]
(0) edge (1)
edge [bend right] (2)
(2) edge (3)
(3) edge [bend right] (1)
(4) edge (5)
edge (7)
(6) edge (7)
edge (5);
\end{tikzpicture}
\right) \strComp \seman{P} &
\end{align*}
\caption{CCS interpretation as strategies}\label{fig:lccs:interp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliography{biblio}
\bibliographystyle{ieeetr}
\end{document}