\documentclass[11pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{indentfirst}
\usepackage{enumerate}
\usepackage{cite}
\usepackage{caption}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\usepackage{todo}

% Custom packages
\usepackage{leftrule_theorems}
\usepackage{my_listings}
\usepackage{my_hyperref}
\usepackage{math}
\usepackage{concurgames}

%\newcommand{\qtodo}[1]{\colorbox{orange}{\textcolor{blue}{#1}}}
%\newcommand{\todo}[1]{\colorbox{orange}{\qtodo{\textbf{TODO:} #1}}}
%\newcommand{\qnote}[1]{\colorbox{Cerulean}{\textcolor{Sepia}{[#1]}}}
%\newcommand{\note}[1]{\qnote{\textbf{NOTE:} #1}}

\author{Théophile \textsc{Bastian}, supervised by Glynn \textsc{Winskel}
and Pierre \textsc{Clairambault} \\
\begin{small}Cambridge University\end{small}}
\title{Internship report\\Concurrent games as event structures}
\date{June-July 2016}

\begin{document}
\maketitle

\begin{abstract}
    During my internship, I have worked on a \emph{deterministic} game
    semantics model for a minimalistic \emph{concurrent} language, described
    using the \emph{games as event structures} formalism. I have proved the
    \emph{adequacy} of this model with respect to the operational semantics of
    the modelled language, and I have implemented this model, allowing one to
    input an expression of the language and obtain its representation as a Dot
    graph. This implementation also supports basic operations on event
    structures, which could be useful to other people working in this domain.
\end{abstract}

\tableofcontents

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}

\paragraph{Game semantics} are a kind of denotational semantics in which a
program's behavior is abstracted as a two-player game, in which Player plays
for the program and Opponent plays for the environment of the program (the
user, the operating system, \ldots). The execution of a program, in this
formalism, is then represented as a succession of moves. For instance, the user
pressing a key on the keyboard would be a move of Opponent, to which Player
could react by triggering the corresponding action (\eg{} adding the
corresponding letter in a text field).

Game semantics emerged mostly with~\cite{hyland2000pcf}
and~\cite{abramsky2000pcf}, independently establishing a fully-abstract model
for PCF using game semantics, while ``classic'' semantics had failed to provide
a fully-abstract, reasonable and satisfying model. But the field mostly gained
notoriety with the development of techniques to capture imperative programming
language constructs, among which reference
handling~\cite{abramsky1996linearity}, followed by higher-order
references~\cite{abramsky1998references}, making it possible to model languages
with side effects, or exception handling~\cite{laird2001exceptions}. Since
then, the field has been deeply explored, providing a wide range of such
constructions in the literature.

A success of game semantics is to provide \emph{compositional} and
\emph{syntax-free} semantics. Syntax-free, because representing a program as a
strategy on a game totally abstracts it from the original syntax of the
programming language, representing only the behavior of a program reacting to
its execution environment, which is often desirable in semantics.
Compositional, because game semantics are usually defined by induction over the
syntax, and thus easily composed. For instance, it is worth noting that the
application of one term to another is represented as the \emph{composition} of
the two strategies.

\paragraph{Concurrency in game semantics.} In the continuity of the efforts put
forward to model imperative primitives in game semantics, it was natural to
focus at some point on modelling concurrency. The problem was tackled by
\fname{Laird}~\cite{laird2001game}, introducing game semantics for a \lcalc{}
with a few additions, as well as a \emph{parallel execution} operator and
communication on channels. It is often considered, though, that \fname{Ghica}
and \fname{Murawski}~\cite{ghica2004angelic} really took the fundamental step
by defining game semantics for a fine-grained concurrent language including
parallel execution of ``threads'' and low-level semaphores --- a far more
realistic approach to the problem.

However, both of these constructions are based on \emph{interleavings}. That
is, they model programs on \emph{tree-like games}, games in which the moves
that a player is allowed to play at a given point are represented as a tree
(\eg, in a state $A$, Player can play the move $x$ by following an edge of the
tree starting from $A$, thus reaching $B$ and allowing Opponent to play a given
set of moves --- the outgoing edges of $B$). The concurrency is then
represented as the \emph{interleaving} of all possible sequences of moves, in
order to reach a game tree in which every possible ``unordered'' (\ie, not
enclosed in any kind of synchronisation block, as with semaphores) combination
of moves is a valid path.

However, this approach introduces non-determinism in the strategies: if two
moves are available to a player, the model states that they make a
non-deterministic uniform choice. Yet, one could reasonably argue that a
program should behave consistently with the environment, which would mean that
the semantics of a program --- even a concurrent one --- should still be
deterministic. This idea was explored outside of the game semantics context,
for instance by~\cite{reynolds1978syntactic}, establishing a type-checking
system to restrict concurrent programs to deterministic ones. Some recent work
makes use of linear logic~\cite{caires2010session} for similar purposes as
well. Yet, the interleaving game semantics of these languages remains
non-deterministic.

\paragraph{The purpose of this internship} was to try to take a first step
towards the reunification of those two developments. For that purpose, my
objective was to give a \emph{deterministic} game semantics to a linear
lambda-calculus enriched with parallel and sequential execution operators, as
well as synchronization on channels. In order to model this, I used the games
as \emph{event structures} formalism, described later on and introduced
in~\cite{rideau2011concurrent} by S. \fname{Rideau} and G. \fname{Winskel}.
Roughly, event structures represent a strategy as a \emph{partial order} on the
moves, stating that move $x$ can only be played after move $y$, which is more
flexible than tree-like game approaches. Although a full-abstraction result
could not be reached --- but is not so far away ---, I have proved the
\emph{adequacy} of the operational and denotational semantics, and have
obtained an implementation of the (denotational) game semantics, that is, code
that translates a term of the language into its corresponding strategy.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A linear \lcalc{} with concurrency primitives: \llccs}

The language on which my internship was focused was meant to be simple, easy to
parse and easy to work with, both in theory and in the implementation. It
should of course include concurrency primitives. For these reasons, we chose to
consider a variant of CCS~\cite{milner1980ccs} --- a simple standard language
including parallel and sequential execution primitives, as well as
synchronization of processes through \emph{channels} ---, lifted up to the
higher order through a \lcalc. The language was then restricted to a
\emph{linear} one --- that is, each identifier declared must be referred to
exactly once ---, partly to keep the model simple, partly to meet the
determinism requirements.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{A linear variant of CCS~: \linccs}

The variant of CCS we chose to use has two base types:
\emph{processes}~($\proc$) and \emph{channels}~($\chan$). It has two base
processes, $0$ (failure) and $1$ (success), although a process can be
considered ``failed'' without reducing to $0$ (in case of deadlock).

\begin{figure}[h]
    \begin{minipage}[t]{0.60\textwidth}
        \begin{center}Terms\end{center}\vspace{-1em}
        \begin{align*}
            t,u,\ldots ::=~&1 & \text{(success)}\\
            \vert~&0 & \text{(error)}\\
            \vert~&t \parallel u & \text{(parallel)}\\
            \vert~&t \cdot u & \text{(sequential)}\\
            \vert~&(\nu a) t & \text{(new channel)}
        \end{align*}
    \end{minipage} \hfill \begin{minipage}[t]{0.35\textwidth}
        \begin{center}Types\end{center}\vspace{-1em}
        \begin{align*}
            A,B,\ldots ::=~&\proc & \text{(process)} \\
            \vert~&\chan & \text{(channel)}
        \end{align*}
    \end{minipage}
    \caption{\linccs{} terms and types}\label{fig:lccs:def}
\end{figure}

The syntax is pretty straightforward to understand: $0$ and $1$ are base
processes; $\parallel$ executes its two operands in parallel; $\cdot$ executes
its two operands sequentially (or synchronizes on a channel if its left-hand
operand is a channel); $(\nu a)$ creates two new channels, $a$ and $\bar{a}$,
on which two processes can be synchronized. Here, the ``synchronization''
simply means that a call to a channel is blocking until its dual channel has
been called as well.

The language is simply typed as in figure~\ref{fig:lccs:typing}. Note that
binary operators split their environment between their two operands, ensuring
that each identifier is used at most once, and that no rule (in particular the
axiom rules) ``forgets'' any part of the environment, ensuring that each
identifier is used at least once. For instance, in $\Gamma =
\left[p:\proc,\,q:\proc\right]$, $0$ cannot be typed (\ie{} $\overline{\Gamma
\vdash 0 : \proc}$ is not a valid rule).

\begin{figure}[h]
    \begin{align*}
        \frac{~}{\,\vdash 0:\proc} & (\textit{Ax}_0) &
        \frac{~}{\,\vdash 1:\proc} & (\textit{Ax}_1) &
        \frac{~}{t:A \vdash t:A} & (\textit{Ax}) &
        \frac{\Gamma, a:\chan, \bar{a}:\chan \vdash P : \proc}
            {\Gamma \vdash (\nu a) P : \proc}
        & (\nu)
    \end{align*}\vspace{-1.5em}\begin{align*}
        \frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
            {\Gamma,\Delta \vdash P \parallel Q : \proc}
        & (\parallel) &
        \frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
            {\Gamma,\Delta \vdash P \cdot Q : \proc}
        & (\cdot_\proc) &
        \frac{\Gamma \vdash P : \proc}
            {\Gamma,a:\chan \vdash a \cdot P: \proc}
        & (\cdot_\chan)
    \end{align*} \vspace{-1.5em}
    \caption{\linccs{} typing rules}\label{fig:lccs:typing}
\end{figure}
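
For instance, the following derivation, which uses only the rules of
figure~\ref{fig:lccs:typing}, shows how the environment split distributes the
two dual channels over the two sides of a parallel composition:
\[
    \frac{\dfrac{\dfrac{~}{\,\vdash 1 : \proc}\;(\textit{Ax}_1)}
                {a:\chan \vdash a \cdot 1 : \proc}\;(\cdot_\chan)
          \qquad
          \dfrac{\dfrac{~}{\,\vdash 1 : \proc}\;(\textit{Ax}_1)}
                {\bar{a}:\chan \vdash \bar{a} \cdot 1 : \proc}\;(\cdot_\chan)}
         {a:\chan,\,\bar{a}:\chan \vdash a \cdot 1 \parallel \bar{a} \cdot 1 : \proc}
    \;(\parallel)
\]
Applying $(\nu)$ to this judgement then yields
$\vdash (\nu a)\,(a \cdot 1 \parallel \bar{a} \cdot 1) : \proc$.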

We also equip this language with operational semantics, in the form of a
labeled transition system (LTS), as described in figure~\ref{fig:lccs:opsem},
where $a$ denotes a channel and $x$ denotes any possible label.

\begin{figure}[h]
    \begin{align*}
        \frac{~}{a \cdot P \redarrow{a} P} & &
        \frac{~}{1 \parallel P \redarrow{\tau} P} & &
        \frac{~}{1 \cdot P \redarrow{\tau} P} & &
        \frac{P \redarrow{\tau_c} Q}
            {(\nu a) P \redarrow{\tau} Q} & (c \in \set{a,\bar{a}})&
        \frac{P \redarrow{a} P'\quad Q \redarrow{\bar{a}} Q'}
            {P \parallel Q \redarrow{\tau_a} P' \parallel Q'}
    \end{align*}\begin{align*}
        \frac{P \redarrow{x} P'}
            {P \parallel Q \redarrow{x} P' \parallel Q} & &
        \frac{Q \redarrow{x} Q'}
            {P \parallel Q \redarrow{x} P \parallel Q'} & &
        \frac{P \redarrow{x} P'}
            {P \cdot Q \redarrow{x} P' \cdot Q} & &
        \frac{P \redarrow{x} P'}
            {(\nu a)P \redarrow{x} (\nu a)P'} & (a \not\in \set{x,\tau_a})
    \end{align*}
    \caption{\linccs{} operational semantics}\label{fig:lccs:opsem}
\end{figure}

We consider that a term $P$ \emph{converges} whenever $P \redarrow{\tau}^\ast
1$, and we write $P \Downarrow$.

The $\tau_a$ reduction scheme may sound a bit unusual. It is, however,
necessary. Consider the reduction of $(\nu a) (a \cdot 1 \parallel \bar{a}
\cdot 1)$: the inner term $\tau_a$-reduces to $1$, thus allowing the whole term
to reduce to $1$; but if we replaced that $\tau_a$ with a $\tau$, the whole
term would reduce to $(\nu a) 1$, which has no valid type since $a$ and
$\bar{a}$ are not consumed (linearity). Our semantics would then not satisfy
subject reduction.

Switching to an \emph{affine} \linccs{} while keeping it wrapped in a
\emph{linear} \lcalc{} was considered, but yielded far too many problems, while
switching to a fully-affine model would have modified the problem too deeply.

\subsection{Lifting to the higher order: linear \lcalc}

In order to reach the studied language, \llccs, we have to lift \linccs{} up to
a \lcalc. To do so, we add to the language the constructions of
figure~\ref{fig:llam:syntax}, which are basically the usual \lcalc{}
constructions slightly transformed to be linear (which is mostly reflected by
the typing rules). In particular, the only base types are $\proc$ and $\chan$.

\begin{figure}[h]
    \begin{minipage}[t]{0.55\textwidth}
        \begin{center}Terms\end{center}\vspace{-1em}
        \begin{align*}
            t,u,\ldots ::=~&x \in \mathbb{V} & \text{(variable)}\\
            \vert~&t~u & \text{(application)}\\
            \vert~&\lambda x^A \cdot t & \text{(abstraction)}\\
            \vert~&\text{\linccs}\textit{ constructions} &
        \end{align*}
    \end{minipage} \hfill \begin{minipage}[t]{0.40\textwidth}
        \begin{center}Types\end{center}\vspace{-1em}
        \begin{align*}
            A,B,\ldots ::=~&A \linarrow B & \text{(linear arrow)}\\
            \vert~&\proc~\vert~\chan & \text{(\linccs)}
        \end{align*}
    \end{minipage}
    \caption{Linear \lcalc{} terms and types}\label{fig:llam:syntax}
\end{figure}

To keep the language simple and to ease the implementation, the
$\lambda$-abstractions are annotated with the type of their abstracted
variable. The usual $\rightarrow$ symbol was also changed to $\linarrow$, as a
clear reminder that the terms are \emph{linear}.

\smallskip

In order to enforce linearity, the only typing rules of the usual \lcalc{}
that have to be changed are $(\textit{Ax})$ and $(\textit{App})$, presented
in figure~\ref{fig:llam:typing}. The $(\textit{Abs})$ rule is the usual one.

\begin{figure}[h]
    \begin{align*}
        \frac{~}{x : A \vdash x : A} & (\textit{Ax}) &
        \frac{\Gamma \vdash t : A \linarrow B \quad \Delta \vdash u : A}
            {\Gamma,\Delta \vdash t~u : B} & (\textit{App}) &
        \frac{\Gamma, x : A \vdash t : B}
            {\Gamma \vdash \lambda x^{A} \cdot t : A \linarrow B} & (\textit{Abs})
    \end{align*}
    \caption{Linear \lcalc{} typing rules}\label{fig:llam:typing}
\end{figure}

Linearity is guaranteed here: in the $(\textit{Ax})$ rule, the environment
must be $x:A$ instead of the usual $\Gamma, x:A$, ensuring that each variable
is used \emph{at least once}; while the environment split in the binary
operators' rules ensures that each variable is used \emph{at most once}
(implicitly, $\Gamma \cap \Delta = \emptyset$).

To lift the operational semantics to \llccs, we only need to add one rule:
\[
    \frac{P \longrightarrow_\beta P'}{P \redarrow{\tau} P'}
\]

%%%%%%%%%%%%%%%%%%%%%
\subsection{Examples}

\begin{itemize}
    \item Simple channel usage: $T_1 \eqdef \newch{a} (a \cdot 1 \parallel
        \bar{a} \cdot 1)$. This term converges: $\left(a \cdot 1 \parallel
        \bar{a} \cdot 1 \right) \redarrow{\tau_a} 1 \parallel 1$, thus $T_1
        \redarrow{\tau} \left(1 \parallel 1\right) \redarrow{\tau} 1$.

    \item Deadlock: $T_2 \eqdef \newch{a} (a \cdot \bar{a} \cdot 1)$. This term
        does not reduce at all: no reduction is possible under the $\nu$.

    \item Simple function call: $T_3 \eqdef ((\lambda x^P \cdot x) 1)
        \parallel 1$ reduces to $1 \parallel 1$.

    \item Channel passing: $T_4 \eqdef
        \newch{f}\newch{g} \left(f \cdot 1 \parallel \left(\left(
        \lambda a^C \cdot \lambda b^C \cdot \lambda c^C
        \cdot \left(\left(a \cdot 1\right) \cdot \left(b \cdot 1\right)\right)
        \parallel \left(c \cdot
        1\right)\right)\,\bar{f}\,\bar{g}\,g\right)\right)$, which
        $\beta$-reduces (and thus $\tau$-reduces) to
        $\newch{f}\newch{g} \left(\left(f\cdot1\right) \parallel
        \left(\left(\left(\bar{f} \cdot 1\right) \cdot
        \left(\bar{g} \cdot 1\right)\right) \parallel \left(g \cdot 1\right)
        \right)\right)$. Note that the function has no idea whether two
        channels are dual or not.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A games model}

Our goal is now to give a games model for the above language. For that purpose,
we will use \emph{event structures}, providing an alternative formalism to the
often-used tree-like games.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The event structures framework}

The traditional approach to concurrent games is to represent them as
\emph{tree-like games}. If the considered game consists of three moves, namely
$A$, $B$ and $C$, where $A$ can be played by Opponent and either one of the
others by Player \emph{after} Opponent has played $A$, then the states of the
game are $\epsilon$, $A$, $A \cdot B$ and $A \cdot C$, which corresponds to the
game tree
\[
    \begin{tikzpicture}
        \node (1) at (0.5,1.2) {A};
        \node (2) at (0,0) {B};
        \node (3) at (1,0) {C};

        \path [->]
            (1) edge (2)
                edge (3);
    \end{tikzpicture}
\]

This can of course be used to describe much larger games, and is often useful
to reason about concurrent programs, since the causal histories appear clearly:
the possible states of the game can be read easily by concatenating the events
found along a path from the root of the tree.

But it also has the major drawback of growing exponentially in size: let us
consider a game in which Opponent must play $A$ and $B$ in no particular order
before Player can play $C$. The corresponding tree-like game would be
\[
    \begin{tikzpicture}
        \node (11) {$A_1$};
        \node (21) [below of=11] {$B_1$};
        \node (31) [below of=21] {$C_1$};
        \node (22) [right of=11] {$B_2$};
        \node (12) [below of=22] {$A_2$};
        \node (32) [below of=12] {$C_2$};

        \path [->]
            (11) edge (21)
            (21) edge (31)
            (22) edge (12)
            (12) edge (32);
    \end{tikzpicture}
\]

This gets even worse with less structure: since there are $n!$ % chktex 40
permutations of $n$ elements, the tree can grow far larger.

This problem motivated the use of \emph{event structures} as a formalism to
describe such games~\cite{rideau2011concurrent}. Informally, an event structure
is a partial order $\leq$ on \emph{events} (here, the game's moves), together
with a \emph{consistency} relation.

The purpose of the consistency relation is to describe non-determinism, in
which we are not interested here, since we seek a deterministic model: in all
the following constructions, I will omit the consistency set. The original
constructions including it can be found for instance
in~\cite{castellan2016concurrent,winskel1986event}.

The partial order $e_1 \leq e_2$ means that $e_1$ must have been played before
$e_2$ can be played. For instance, the Hasse diagram of the previous game would
look like
\[
    \begin{tikzpicture}
        \node (1) {A};
        \node (2) [right of=1] {B};
        \node (3) [below left of=1, below right of=2] {C};
        \path[->]
            (1) edge (3)
            (2) edge (3);
    \end{tikzpicture}
\]

%%%%%
\subsubsection{Event structures}

\begin{definition}[event structure]
    An \emph{event structure}~\cite{winskel1986event} is a poset $(E, \leq_E)$,
    where $E$ is a set of \emph{events} and $\leq_E$ is a partial order on $E$
    such that for all $e \in E$, $\downclose{e} \eqdef \set{e' \in E~\vert~e'
    \leq_E e}$ is finite.

    The partial order $\leq_E$ naturally induces a binary relation $\edgeArrow$
    over $E$ that is defined as the transitive reduction of $\leq_E$.
\end{definition}

In this context, the right intuition of an event structure is a set of events
that can occur, the players' moves, together with a partial order stating that
a given move cannot occur before another move.

Event structures are often represented as a directed acyclic graph (DAG)
where the vertices are the elements of $E$ and the edges are the transitive
reduction of $\leq_E$.

\begin{definition}[event structure with polarities]
    An \emph{event structure with polarities} (\textit{ESP}) is an event
    structure $(E, \leq_E, \rho)$, where $\rho : E \to \set{+,-}$ is a
    function associating a \emph{polarity} to each event.
\end{definition}

In order to model games, this is used to represent whether a move is to be
played by Player or Opponent. To represent polarities, we will often use colors
instead of $+$ and $-$ signs: a red-circled event will have a negative
polarity, \ie{} will be played by Opponent, while a green-circled one will have
a positive polarity.

The ESP of the previous example would then be
\[
    \begin{tikzpicture}
        \node (1) at (0,1.6) [draw=red, ellipse] {A};
        \node (2) at (1,1.6) [draw=red, ellipse] {B};
        \node (3) at (0.6,0) [draw=green, ellipse] {C};
        \path[->]
            (1) edge (3)
            (2) edge (3);
    \end{tikzpicture}
\]

\begin{definition}[configuration]
    A \emph{configuration} of an ESP $A$ is a finite subset $X \subseteq A$
    that is \emph{down-closed}, \ie{}
    \vspace{-0.5em}
    \[ {\forall x \in X}, {\forall e \in A}, {e \leq_A x} \implies {e \in X}.\]

    We write $\config(A)$ for the set of configurations of $A$.
\end{definition}

A configuration can thus be seen as a valid state of the game. For instance,
the configurations of the previous example are $\emptyset$, $\set{A}$,
$\set{B}$, $\set{A,B}$ and $\set{A,B,C}$: the event $C$ can only appear once
both of its causes are present. The set $\config(A)$ plays a major role in
definitions and proofs on games and strategies.

\begin{notation}
    For $x,y \in \config(A)$, $x \forkover{e} y$ states that $y =
    x \sqcup \set{e}$ (and that both are valid configurations), where $\sqcup$
    denotes the disjoint union. It is also possible to write $x \forkover{e}$,
    stating that $x \sqcup \set{e} \in \config(A)$, or $x \fork y$.
\end{notation}

%%%%%
\subsubsection{Concurrent games}

\begin{definition}[game]
    A \emph{game} $A$ is an event structure with polarities. \\
    The dual game $A^\perp$ is the game $A$ where all the polarities in
    $\rho$ have been reversed.
\end{definition}

For instance, one could imagine a game modeling the user interface of a coffee
machine: Player is the coffee machine, while Opponent is a user coming to buy a
drink.

\begin{example}[coffee machine]
    A game describing a coffee machine could be the following one:
    \[ \begin{tikzpicture}
        \node (1) at (2,1.5) [draw=red, ellipse] {coin};
        \node (2) at (-1,1.5) [draw=red, ellipse] {getCoffee};
        \node (3) at (5,1.5) [draw=red, ellipse] {getTea};

        \node (4) at (0,0) [draw=green, ellipse] {giveCoffee};
        \node (5) at (4,0) [draw=green, ellipse] {giveTea};
    \end{tikzpicture}
    \]

    Note that there are no edges at all. Indeed, we are here describing the
    \emph{game}, that is, giving a structure on which we can model any software
    that would run on the hardware of the coffee machine. Nothing is hardwired
    that would make it mandatory to insert a coin before getting a coffee: the
    \emph{software} decides that; it is thus up to the \emph{strategy} --- of
    which we will talk later on --- to impose such constraints.
\end{example}

\begin{example}[process game]
    We can represent a process by the following game:
    \[
        \begin{tikzpicture}
            \node (1) at (0,0) [draw=red,ellipse] {call};
            \node (2) at (2,0) [draw=green,ellipse] {done};
            \path[->]
                (1) edge (2);
        \end{tikzpicture}
    \]
    The ``call'' event will be triggered by Opponent (the system) when the
    process is started, and Player will play ``done'' when the process has
    finished, if it ever does. The relation $\text{call} \leq \text{done}$
    means that a process cannot finish \emph{before} it is called: unlike what
    happened in the previous example, it is here a ``hardwired'' relation that
    the software cannot bypass.
\end{example}

\begin{definition}[pre-strategy]
    A \emph{pre-strategy} on the game $A$, written $\sigma: A$, is an ESP such
    that
    \begin{enumerate}[(i)]
        \item $\sigma \subseteq A$;
        \item $\config(\sigma) \subseteq \config(A)$;
        \item $\forall s \in \sigma, \rho_A(s) = \rho_\sigma(s)$
    \end{enumerate}
\end{definition}

In particular, (ii) imposes that $\leq_A$ restricted to $\sigma$ is included in
$\leq_\sigma$.

\begin{example}[processes, cont.]
    A possible \emph{pre-strategy} for the game consisting of two processes put
    side by side (in which the game's events are annotated with a number to
    distinguish the elements of the two processes) would be

    \[
        \begin{tikzpicture}
            \node (1) at (0,1.2) [draw=red,ellipse] {call$_0$};
            \node (2) at (0,0) [draw=green,ellipse] {done$_0$};
            \node (3) at (2,1.2) [draw=red,ellipse] {call$_1$};

            \path[->]
                (1) edge (2)
                (3) edge (2);
        \end{tikzpicture}
    \]

    This pre-strategy is valid: it is a subset of the game that does not
    include $\text{done}_1$, but it does include both $\text{call}_0$ and
    $\text{done}_0$, and it inherits the game's partial order.

    This would describe two processes working in parallel. Process $0$ waits
    until process $1$ has been called before terminating, and process $1$
    never returns. Assuming that \lstc{called} is an initially false boolean
    shared between the two processes, this could for instance be written

    \begin{minipage}[T]{0.45\textwidth}
        \begin{center}\textit{Process 0}\end{center}
        \begin{lstlisting}[language=C++]
int main() {
    while(!called) {}
    return 0;
}\end{lstlisting}
    \end{minipage}
    \begin{minipage}[T]{0.45\textwidth}
        \begin{center}\textit{Process 1}\end{center}
        \begin{lstlisting}[language=C++]
int main() {
    called=true;
    while(true) {}
    // never returns.
}\end{lstlisting}
    \end{minipage}
\end{example}

But as it is defined, a pre-strategy does not exactly capture what we expect of
a \emph{strategy}: it is too expressive. For instance, the relation
$\text{call}_0 \leq \text{call}_1$ could be added to the above pre-strategy,
stating that the operating system cannot decide to start process $1$ before
process $0$. It is not up to the program to decide that; such a strategy is
thus unrealistic. We then have to restrict pre-strategies to \emph{strategies}:

\begin{definition}[strategy]
    A \emph{strategy} is a pre-strategy $\sigma : A$ that
    ``behaves well'', \ie{} that is
    \begin{enumerate}[(i)]
        \item\label{def:receptive}
            \textit{receptive}: for all $x \in \config(A)$ \st{}
            $x \forkover{e^-}$, $e \in \sigma$;

        \item\label{def:courteous}
            \textit{courteous}: $\forall x \edgeArrow_\sigma x' \in \sigma$,
            $(\rho(x),\rho(x')) \neq (-,+) \implies
            x \edgeArrow_A x'$.
    \end{enumerate}
\end{definition}

(\ref{def:receptive}) captures the idea that we cannot prevent Opponent from
playing one of its moves. Indeed, not including an event in a strategy means
that this event \emph{will not} be played. It is unreasonable to consider that
a strategy could forbid Opponent from playing a given move, unless the game
itself forbids that as well.

(\ref{def:courteous}) states that unless a dependency relation is imposed by
the game's rules, one can only make one of its moves depend on an Opponent
move, \ie{} every direct arrow in the partial order that is not inherited from
the game should be ${\ominus \edgeArrow \oplus}$. Clearly, it is unreasonable
to consider an arrow ${\ostar \edgeArrow \ominus}$, which would mean forcing
Opponent to wait for a move (either from Player or Opponent) before playing
their move; but ${\oplus \edgeArrow \oplus}$ is also unreasonable, since we are
working in a concurrent context. Intuitively, one could think that when playing
$x$ then $y$, it is undefined whether Opponent will receive $x$ then $y$ or $y$
then $x$.
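
The courtesy condition is easy to check mechanically on the finite event
structures we consider. The following sketch is an illustration only: the
representation and the names are assumptions of this example, not the
repository's actual code. It checks courtesy for a strategy given by its
immediate-causality edges:

\begin{lstlisting}[language=Caml]
(* Illustrative sketch, not the actual implementation. *)
type polarity = Plus | Minus

(* A strategy is courteous when every immediate causal link that it adds on
   top of the game goes from a negative event to a positive one. *)
let courteous
    (edges_sigma : ('a * 'a) list)   (* immediate causality of the strategy *)
    (edges_game  : ('a * 'a) list)   (* immediate causality of the game     *)
    (pol : 'a -> polarity)           (* polarity of each event              *)
  : bool =
  List.for_all
    (fun (x, x') ->
       (pol x = Minus && pol x' = Plus) || List.mem (x, x') edges_game)
    edges_sigma
\end{lstlisting}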

%%%%%
\subsubsection{Operations on games and strategies}\label{sssec:es_operations}

In order to manipulate strategies and define them by induction over the syntax,
the following operations will be extensively used. It may also be worth noting
that in the original formalism~\cite{castellan2016concurrent}, games,
strategies and maps between them form a bicategory in which these operations
play special roles.

In this whole section, unless stated otherwise, $E$ and $F$ denote ESPs; $A$,
$B$ and $C$ denote games; $\sigma: A$ and $\tau: B$ denote strategies.

\begin{definition}[parallel composition]
    The \emph{parallel composition} $E \parallel F$ of two ESPs is an ESP
    whose events are $\left(\set{0} \times E\right) \sqcup \left(\set{1} \times
    F\right)$ (the disjoint tagged union of the events of $E$ and $F$), and
    whose partial order is $\leq_E$ on $E$ and $\leq_F$ on $F$, with no
    relation between elements of $E$ and $F$.

    One can then naturally expand this definition to games (by preserving
    polarities) and to strategies.
\end{definition}

In the earlier example, when talking of ``two processes side by side'', we were
actually referring to the parallel composition of two process games, in which
we took the liberty of renaming the events for more clarity (which we will
often do).

\smallskip

Given two strategies on dual games $A$ and $A^\perp$, it is natural and
interesting to compute their \emph{interaction}, that is, ``what will happen if
one strategy plays against the other''.

\begin{definition}[closed interaction]
    Given two strategies $\sigma : A$ and $\tau : A^\perp$, their
    \emph{interaction} $\sigma \wedge \tau$ is the ESP
    $\sigma \cap \tau \subseteq A$ from which causal loops have been removed.

    More precisely, $\sigma \cap \tau$ is a set adjoined with a \emph{preorder}
    ${(\leq_\sigma \cup \leq_\tau)}^\ast$ (transitive closure) that may not
    respect antisymmetry, that is, may have causal loops. The event structure
    $\sigma \wedge \tau$ is then obtained by removing all the elements
    contained in such loops from $\sigma \cap \tau$, yielding an actual
    partial order.
\end{definition}

\textit{This construction is a simplified version of the analogous one
from~\cite{castellan2016concurrent} (the pullback), taking advantage of the
fact that our event structures are deterministic --- that is, without a
consistency set.}

This indeed captures what we wanted: $\sigma \wedge \tau$ contains the moves
that both $\sigma$ and $\tau$ are ready to play, including both orders, except
for the events that can never be played because of a ``deadlock'' (\ie{} a
causal loop).
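
Computationally, this definition can be read quite literally. The following
sketch is purely illustrative (the representation, the names and the omission
of polarities are simplifications of this example, not the actual
implementation): it computes the interaction of two strategies given as sets
of events and causality edges.

\begin{lstlisting}[language=Caml]
(* Illustrative sketch of the closed interaction; a strategy is given here as
   (events, edges), where [edges] is any relation generating its order. *)
let interaction (ev_s, ed_s) (ev_t, ed_t) =
  (* events that both strategies agree to play *)
  let ev = List.filter (fun e -> List.mem e ev_t) ev_s in
  (* union of the two causality relations, restricted to those events *)
  let ed =
    List.filter (fun (x, y) -> List.mem x ev && List.mem y ev) (ed_s @ ed_t) in
  (* is there a non-empty path from [x] to [target] along [ed]? *)
  let rec reaches seen x target =
    List.exists
      (fun (a, b) ->
         a = x
         && (b = target
             || ((not (List.mem b seen)) && reaches (b :: seen) b target)))
      ed in
  (* an event lies on a causal loop iff it reaches itself *)
  let on_loop e = reaches [e] e e in
  let ev' = List.filter (fun e -> not (on_loop e)) ev in
  (ev', List.filter (fun (x, y) -> List.mem x ev' && List.mem y ev') ed)
\end{lstlisting}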

\smallskip

We might now try to generalize this to an \emph{open} case, where the two
strategies do not play on the same game, but only share a common part. Our
goal here is to \emph{compose} strategies: indeed, a strategy on $A^\perp
\parallel B$ can be seen as a strategy \emph{from $A$ to $B$}, playing as
Opponent on a board $A$ and as Player on a board $B$. This somehow looks like a
function, that could be composed with another strategy on $B^\perp \parallel
C$ (as one would compose two mathematical functions $f$ and $g$ into $g \circ
f$).

\begin{definition}[compositional interaction]
    Given two strategies $\sigma : A^\perp \parallel B$ and $\tau : B^\perp
    \parallel C$, their \emph{compositional interaction} $\tau \strInteract
    \sigma$ is an event structure defined as $(\sigma \parallel C) \wedge
    (A^\perp \parallel \tau)$, where $A^\perp$ and $C$ are seen as strategies.
\end{definition}

The idea is to put in correspondence the ``middle'' states (those of $B$) while
adding ``neutral'' states for $A$ and $C$, which gives us two strategies
playing on the same game (if we ignore polarities), $A \parallel B \parallel
C$.

Here, we define $\tau \strInteract \sigma$ as an \emph{event structure} (\ie,
without polarities): indeed, the two strategies disagree on the polarities of
the middle part. Alternatively, it can be seen as an ESP with a polarity
function over $\set{+,-,?}$.

From this point, the notion of composition we sought is only a matter of
``hiding'' the middle part:

\begin{definition}[strategies composition]
    Given two strategies $\sigma : A^\perp \parallel B$ and $\tau : B^\perp
    \parallel C$, their \emph{composition} $\tau \strComp \sigma$ is the ESP
    $(\tau \strInteract \sigma) \cap (A^\perp \parallel C)$, on which the
    partial order is the restriction of $\leq_{\tau \strInteract \sigma}$ and
    the polarities those of $\sigma$ and $\tau$.
\end{definition}

It is then useful to consider an identity strategy \wrt{} the composition
operator. This identity is called the \emph{copycat} strategy: on a game $A$,
the copycat strategy plays on $A^\perp \parallel A$ and replicates the moves of
the other player from each board onto the other.

\begin{definition}[copycat]
    The \emph{copycat strategy} of a game $A$, $\cc_A$, is the strategy on the
    game $A^\perp \parallel A$ whose events are the whole of $A^\perp \parallel
    A$, and whose order is the transitive closure of $\leq_{A^\perp \parallel A}
    \cup \set{ (1-i, e) \leq (i, e) \vert e \in A~\&~\rho((i,e)) = \oplus}$.
\end{definition}

The copycat strategy of a game is indeed an identity for the composition of
\emph{strategies}. In fact, it even holds that for a \emph{pre-}strategy
$\sigma : A$, $\sigma$ is a strategy $\iff$ $\cc_A \strComp \sigma = \sigma$.

\begin{example}[copycat]
    If we consider the following game $A$
    \[
        \begin{tikzpicture}
            \node (1) [draw=red,ellipse] {A};
            \node (3) [draw=green,ellipse,right of=1] {C};
            \node (2) [draw=red,ellipse,right of=3] {B};
            \path[->]
                (3) edge (1);
        \end{tikzpicture}
    \]
    its copycat strategy $\cc_A$ is
    \[
        \begin{tikzpicture}
            \node (01) {($A^\perp$)};
            \node (02) [below of=01] {($A$)};
            \node (11) [draw=green,ellipse,right of=01] {A};
            \node (31) [draw=red,ellipse,right of=11] {C};
            \node (21) [draw=green,ellipse,right of=31] {B};

            \node (12) [draw=red,ellipse,right of=02] {A};
            \node (32) [draw=green,ellipse,right of=12] {C};
            \node (22) [draw=red,ellipse,right of=32] {B};
            \path[->]
                (12) edge (11)
                (22) edge (21)
                (31) edge (32)
                % (31) edge (11)
                (32) edge (12);
        \end{tikzpicture}
    \]

    Note that the edge $C \rightarrow A$ in the upper row is no longer needed,
    since it can be obtained transitively and we only represent the transitive
    reduction of the partial order.
\end{example}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Interpretation of \llccs}\label{ssec:llccs_interp}

We can now equip \llccs{} with denotational semantics, interpreting the
language as strategies as defined in figure~\ref{fig:llccs:interp}.

\begin{figure}[h]
    \begin{align*}
        \seman{x^A} &\eqdef \cc_{\seman{A}} &
        \seman{A \linarrow B} &\eqdef \seman{A}^\perp \parallel \seman{B} \\
        \seman{t^{A \linarrow B}~u^{A}} &\eqdef
            \cc_{A \linarrow B} \strComp \left( \seman{t} \parallel \seman{u}
            \right) \\
        \seman{\lambda x^A \cdot t} &\eqdef \seman{t} \\
        \seman{P \parallel Q} &\eqdef \left(
            \begin{tikzpicture}[baseline, scale=0.8]
                \node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
                \node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
                \node (2) at (2.5,0.65) [draw=green,ellipse] {call $Q$};
                \node (3) at (2.5,-0.65) [draw=red,ellipse] {done $Q$};
                \node (0) at (5,0.65) [draw=red,ellipse] {call};
                \node (1) at (5,-0.65) [draw=green,ellipse] {done};
                \path[->]
                    (0) edge (1)
                        edge [bend right] (2)
                        edge [bend right] (4)
                    (2) edge (3)
                    (4) edge (5)
                    (3) edge [bend right] (1)
                    (5) edge [bend right] (1);
            \end{tikzpicture}
        \right) \strComp \left(\seman{P} \parallel \seman{Q}\right) &
        \seman{\proc} = \seman{\chan} &\eqdef \begin{tikzpicture}[baseline]
            \node (1) at (0,0.5) [draw=red,ellipse] {call};
            \node (2) at (0,-0.5) [draw=green,ellipse] {done};
            \draw [->] (1) -- (2);
        \end{tikzpicture}
        \\ %%%%%%%%%%%%%%%%%%%%%%%%%
        \seman{P \cdot Q} &\eqdef \left(
            \begin{tikzpicture}[baseline,scale=0.8]
                \node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
                \node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
                \node (2) at (2.5,0.65) [draw=green,ellipse] {call $Q$};
                \node (3) at (2.5,-0.65) [draw=red,ellipse] {done $Q$};
                \node (0) at (5,0.65) [draw=red,ellipse] {call};
                \node (1) at (5,-0.65) [draw=green,ellipse] {done};
                \path[->]
                    (0) edge (1)
                        edge [bend right] (4)
                    (2) edge (3)
                    (4) edge (5)
                    (3) edge [bend right] (1)
                    (5) edge (2);
            \end{tikzpicture}
        \right) \strComp \left(\seman{P} \parallel \seman{Q}\right) &
        \seman{1} &\eqdef \begin{tikzpicture}[baseline]
            \node (1) at (0,0.5) [draw=red,ellipse] {call};
            \node (2) at (0,-0.5) [draw=green,ellipse] {done};
            \draw [->] (1) -- (2);
        \end{tikzpicture}
        \\ %%%%%%%%%%%%%%%%%%%%%%%%%
        \seman{(a : \chan) \cdot P} &\eqdef \left(
            \begin{tikzpicture}[baseline,scale=0.8]
                \node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
                \node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
                \node (2) at (2.5,0.65) [draw=green,ellipse] {call $a$};
                \node (3) at (2.5,-0.65) [draw=red,ellipse] {done $a$};
                \node (0) at (5,0.65) [draw=red,ellipse] {call};
                \node (1) at (5,-0.65) [draw=green,ellipse] {done};
                \path[->]
                    (0) edge (1)
                        edge [bend right] (2)
                    (2) edge (3)
                    (4) edge (5)
                    (3) edge (4)
                    (5) edge [bend right] (1);
            \end{tikzpicture}
        \right) \strComp \left(\seman{P} \parallel \seman{a}\right) &
        \seman{0} &\eqdef \begin{tikzpicture}[baseline]
            \node (1) at (0,0.2) [draw=red,ellipse] {call};
        \end{tikzpicture}
        \\ %%%%%%%%%%%%%%%%%%%%%%%%%
        \seman{(\nu a) P} &\eqdef \left(
            \begin{tikzpicture}[baseline,scale=0.8]
                \node (6) at (0,0.65) [draw=green,ellipse] {call $a$};
                \node (7) at (0,-0.65) [draw=red,ellipse] {done $a$};
                \node (4) at (2.5,0.65) [draw=green,ellipse] {call $\bar{a}$};
                \node (5) at (2.5,-0.65) [draw=red,ellipse] {done $\bar{a}$};
                \node (2) at (5,0.65) [draw=green,ellipse] {call $P$};
                \node (3) at (5,-0.65) [draw=red,ellipse] {done $P$};
                \node (0) at (7.5,0.65) [draw=red,ellipse] {call};
                \node (1) at (7.5,-0.65) [draw=green,ellipse] {done};
                \path[->]
                    (0) edge (1)
                        edge [bend right] (2)
                    (2) edge (3)
                    (3) edge [bend right] (1)
                    (4) edge (5)
                        edge (7)
                    (6) edge (7)
                        edge (5);
            \end{tikzpicture}
        \right) \strComp \seman{P} &
    \end{align*}
    \caption{\llccs{} interpretation as strategies}\label{fig:llccs:interp}
\end{figure}

Most of these interpretations are as expected: $\proc$ and $\chan$ have the
same interpretation as presented before, $1$ is interpreted as the game playing
\textit{done} while $0$ does not, and the arrow is the type introduced for
functions.

A variable is represented by a \emph{copycat} of its type, to keep a version of
the variable in the environment, reflecting the judgement $x:A \vdash x:A$.
The same goes for the abstraction: the typing rule states that given a term
verifying $\Gamma, x:A \vdash t:B$, we should obtain a term verifying $\Gamma
\vdash \lambda x^A \cdot t : A \linarrow B$. The only thing we have to do is to
``move'' $x$ from the environment to the term, which is transcribed by the
associativity of $\parallel$. In the application, one can notice that the
copycat wires up $\seman{t} \parallel \seman{u}$ perfectly: the version of $x$
from the environment of $t$ is connected to $u$, and the output of $t$ is
connected to the output of the term.

In the representation above, the drawn strategies are organized in columns: for
instance, the strategy involved in $\seman{P \parallel Q}$ plays on
$\seman{\proc}^\perp \parallel \seman{\proc}^\perp \parallel \seman{\proc}$.
Each column of events stands for one of those games, in this order.
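
As a quick sanity check, these definitions can be unfolded by hand on a small
term. For $\seman{1 \parallel 1}$, the mediating strategy drawn in
figure~\ref{fig:llccs:interp} first receives the outer \textit{call}, forwards
it as \textit{call $P$} and \textit{call $Q$} to the two copies of $\seman{1}$,
which each answer \textit{done}; once both answers have arrived, the outer
\textit{done} is played. After hiding the middle boards, only
$\textit{call} \leq \textit{done}$ remains, that is
\[ \seman{1 \parallel 1} = \seman{1}, \]
as the adequacy theorem of the next section requires, since $1 \parallel 1
\Downarrow$. Replacing one operand by $0$, the outer \textit{done} is never
enabled and $\seman{1 \parallel 0} = \seman{0}$.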

%%%%%%%%%%%%%%%%%%%%%
\subsection{Adequacy}

We will now prove the major result of this study, the \emph{adequacy} of the
game semantics.

\begin{theorem}[adequacy]
    The previous interpretation is \emph{adequate} with respect to the
    operational semantics, that is,
    \[ \forall P ~\st~ \left(\vdash P : \proc\right),~
        \left(P \Downarrow\right) \iff \left(\seman{P} = \seman{1}\right) \]
\end{theorem}

In order to prove the theorem, a few intermediary definitions and results are
required.

\begin{definition}[evaluation in a context]
    For $l$ a list of channels and $P$ a term, the evaluation of $P$ in the
    context $l$, $\seman{P}_l$, is defined by induction by $\seman{P}_{[]}
    \eqdef \seman{P}$ and $\seman{P}_{h::t} \eqdef \seman{(\nu h)P}_t$.
\end{definition}

\begin{definition}[valid contexts]
    The set $\validctx_{P \redarrow{x} Q}$ of \emph{valid contexts} for a
    reduction $P \redarrow{x} Q$ is the set of (ordered) lists of channels $l$
    such that
    \begin{enumerate}[(i)]
        \item $\forall a:\chan \in \freevars(P)~\st~\bar{a} \in \freevars(P),~
            a \in l \text{ or } \bar{a} \in l$;
        \item if $x = a : \chan$, $a \not\in l$;
        \item $\forall a : \chan,~a \in l \implies \bar{a} \not\in l$.
    \end{enumerate}
\end{definition}

\begin{lemma}
    For all $a: \chan$, $P, Q, R : \proc$, the following properties hold:
    \begin{enumerate}[(i)]
        \item $\newch{a} (P \parallel Q) = \left(\newch{a} P\right) \parallel
            Q$ when $a,\bar{a} \not\in \freevars{Q}$;
        \item $\newch{a} (P \parallel Q) = P \parallel \left(\newch{a}
            Q\right)$ when $a,\bar{a} \not\in \freevars{P}$;
        \item $\newch{a} \left(P\cdot Q\right) = \left(\newch{a} P\right) \cdot
            Q$ when $a,\bar{a} \not\in \freevars{Q}$;
        \item $\seman{\parallel}_l$ is associative, that is, for all $l$,
            $\seman{(A \parallel B) \parallel C}_l = \seman{A \parallel (B
            \parallel C)}_l$.
    \end{enumerate}
\end{lemma}

The previous lemma's proof mostly consists of technical, formal reasoning on
event structures, but it is essentially intuitive.

The theorem is mostly a consequence of the following lemma:

\begin{lemma}
    $\forall P \redarrow{x} Q,~\forall l \in \validctx_{P \redarrow{x} Q}$,
    \begin{enumerate}[(i)]
        \item if $x = \tau$, then $\seman{P}_l = \seman{Q}_l$;
        \item if $x = a: \chan$, then $\seman{P}_l = \seman{a \cdot Q}_l$;
        \item if $x = \tau_a$ ($a : \chan$), then $\seman{P}_{a::l} =
            \seman{Q}_l$.
    \end{enumerate}
\end{lemma}

\begin{proof}
    We prove this by induction over the rules of the operational semantics of
    \llccs. Most of the cases are straightforward; thus, we only sketch
    the proof for a few of them.

    \begin{itemize}
        \item The basic rules (for $\parallel$, $\cdot$, \ldots) follow from
            the previous lemma.
        \item $\dfrac{P \redarrow{a} P'\quad Q \redarrow{\bar{a}} Q'}
            {P \parallel Q \redarrow{\tau_a} P' \parallel Q'}$:
            for all $l$, by (ii), $\seman{P}_l = \seman{a \cdot P'}_l$ and
            $\seman{Q}_l = \seman{\bar{a} \cdot Q'}_l$, thus $\seman{(\nu a)(P
            \parallel Q)}_l = \seman{\newch{a}(a \cdot P' \parallel \bar{a}
            \cdot Q')}_l$ (inline replacement of terms, permitted by the
            previous lemma), thus $\seman{\newch{a} (P \parallel Q)}_l =
            \seman{P' \parallel Q'}_l$.
        \item $\dfrac{P \redarrow{\tau_c} Q}{\newch{a} P \redarrow{\tau} Q} (c
            \in \set{a,\bar{a}})$ clearly works thanks to (iii).
    \end{itemize}
\end{proof}
|
|
|
|
|
|
|
|
\begin{proof}[Proof: theorem]
    \emph{Forwards implication}: $\left(P \Downarrow\right) \implies
    \left(\seman{P} = \seman{1}\right)$. We proceed by induction over the
    derivation of $P \redarrow{\tau}^\ast 1$, iterating the previous lemma.

    \emph{Backwards implication}: $\left(\seman{P} = \seman{1}\right) \implies
    \left(P \Downarrow\right)$. We prove the contrapositive, $P \neq
    1\,\&\,\seman{P} = \seman{1} \implies \exists Q\,:\,P \redarrow{\tau} Q$,
    by induction over the syntax of \linccs{} (\wlogen, we can assume that $P
    \not\longrightarrow_\beta$, so that the term is in \linccs): for each
    syntactic construct, we prove that, under the induction hypotheses, such a
    $Q$ exists.
\end{proof}
This proves the adequacy of our semantics, which gives some credibility to the
game semantics we provided: indeed, in order to decide whether a term
converges, we can compute its associated strategy and check whether it is
$\seman{1}$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Implementation of deterministic concurrent games}
\hfill\href{https://tobast.fr/l3/demo.html}
    {\includegraphics[height=2em]{tryme32.png}~\raisebox{0.5em}{Try online}}
\qquad \href{https://github.com/tobast/cam-strategies/}
    {\includegraphics[height=2em]{github32.png}~\raisebox{0.5em}{Github
    repository}}

\vspace{1em}

One of the goals of my internship was to implement the operations on games and
strategies described in §\ref{sssec:es_operations}, and to use them to provide
a convenient Dot representation of the interpretation of \llccs{} described in
§\ref{ssec:llccs_interp}.
\subsection{Structures}
The implementation aims to stay as close as possible to the mathematical
model, while still providing reasonably efficient operations.

As we do not handle non-determinism, an event structure can easily be
represented as a DAG in memory. The representation chosen is a set of nodes,
each containing (along with a few other pieces of information) a list of
incoming and outgoing edges.
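A minimal sketch of such a representation could look as follows (the types and
names here are hypothetical and simplified, not those of the actual
implementation, which stores more data per node):

\begin{lstlisting}[language={[Objective]Caml}]
(* Illustrative sketch only: hypothetical, simplified types. *)
type polarity = Player | Opponent

type event = {
  id : int;                      (* unique identifier of the event *)
  pol : polarity;                (* who plays this move *)
  mutable incoming : event list; (* immediate causal dependencies *)
  mutable outgoing : event list; (* events directly enabled by this one *)
}

type event_structure = { mutable events : event list }

(* Events with no causal dependency: the moves that can be played first. *)
let minimal_events (es : event_structure) : event list =
  List.filter (fun e -> e.incoming = []) es.events

(* Record a causal edge: e1 must occur before e2. *)
let add_edge (e1 : event) (e2 : event) : unit =
  e1.outgoing <- e2 :: e1.outgoing;
  e2.incoming <- e1 :: e2.incoming
\end{lstlisting}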
\subsection{Generic operations}
The software --- apart from a few convenience functions used to input and
output games and strategies, as well as the \llccs{} handling --- chiefly
consists of graph-handling functions (mostly used internally) and of the
implementation of the operations on games and strategies previously described.

This makes it possible to compute the result of any operation on a
deterministic strategy, and is modular enough for non-determinism to be
implemented on top of it later on (even without having to understand the whole
codebase). These operations can be used directly from the OCaml toplevel,
conveniently initialized with the correct modules loaded by \lstbash{make
toplevel}; but the library is mostly intended as a backend for higher-level
interfaces, such as the \llccs{} interface.
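To give an idea of what such a generic operation looks like at the level of
the bare DAGs, the following hypothetical, simplified sketch (not the actual
code, and ignoring the labelling of events into the underlying games) computes
a parallel composition as a disjoint union:

\begin{lstlisting}[language={[Objective]Caml}]
(* Illustrative sketch: an event structure seen as bare data, with events
   represented by non-negative integers and causality by ordered pairs. *)
type dag = {
  events : int list;
  deps   : (int * int) list;  (* (e, e') means e must occur before e' *)
}

(* Parallel composition at the level of the DAGs: a disjoint union, where
   the events of [b] are renamed to avoid clashes with those of [a]. *)
let parallel (a : dag) (b : dag) : dag =
  let shift = 1 + List.fold_left max 0 a.events in
  { events = a.events @ List.map (fun e -> e + shift) b.events;
    deps   = a.deps @ List.map (fun (e, e') -> (e + shift, e' + shift)) b.deps }
\end{lstlisting}
In the actual implementation, such an operation must of course also keep track
of the polarities of the events and of their mapping into the games, which is
where most of the bookkeeping lies.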
\subsection{Modelling \llccs}
Modelling \llccs{} required implementing a lexer and parser for the language
and a function transforming \llccs{} terms into strategies, as well as a
rendering backend displaying a strategy as a Dot graph. This could then be
plugged into an HTML/JavaScript frontend using \texttt{js\_of\_ocaml}; this
frontend is linked above.
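The Dot export itself is conceptually simple; a hypothetical, minimal sketch
of such a rendering pass (not the actual code) could be:

\begin{lstlisting}[language={[Objective]Caml}]
(* Illustrative sketch: render a DAG of labelled events as a Dot graph. *)
type labelled_dag = {
  nodes : (int * string) list;  (* event identifier and display label *)
  edges : (int * int) list;     (* causal dependencies *)
}

let to_dot (g : labelled_dag) : string =
  let buf = Buffer.create 256 in
  Buffer.add_string buf "digraph strategy {\n";
  List.iter (fun (id, label) ->
      Buffer.add_string buf (Printf.sprintf "  n%d [label=\"%s\"];\n" id label))
    g.nodes;
  List.iter (fun (src, dst) ->
      Buffer.add_string buf (Printf.sprintf "  n%d -> n%d;\n" src dst))
    g.edges;
  Buffer.add_string buf "}\n";
  Buffer.contents buf
\end{lstlisting}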
The major difficulty came from the need to massively reorder the sub-terms of
the strategies on the fly: indeed, in order to know how to compose two
strategies $\sigma$ and $\tau$, the implementation keeps track of the parallel
compositions that were taken to obtain both strategies. For instance, if
$\sigma$ was obtained by putting strategies in parallel so that its game is
$(A \parallel B^\perp) \parallel C^\perp$, and $\tau$ was obtained in the same
way, reaching a game $(B \parallel C) \parallel D$, the implementation would
refuse to compute $\tau \strComp \sigma$, because it would try to match the
games $C^\perp$ and $(B \parallel C)$.
The theoretical construction makes extensive use of the associativity (up to
isomorphism) of $\parallel$, but in the code, each use of associativity must be
made explicit, leading to a large amount of code.
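For instance, the shape of the parallel compositions leading to a game can be
tracked as a binary tree, and every appeal to associativity then becomes an
explicit rewriting step that the strategy over the game has to follow; a
hypothetical sketch (not the actual code) of such a step:

\begin{lstlisting}[language={[Objective]Caml}]
(* Illustrative sketch: the shape of the parallel compositions that led to a
   game, kept as a binary tree. *)
type game_shape =
  | Base of string                  (* an atomic game, e.g. "A" or "C^perp" *)
  | Par of game_shape * game_shape  (* parallel composition *)

(* One explicit associativity step: (A || B) || C  becomes  A || (B || C).
   In the actual code, the events of the strategy playing on the game have to
   be relabelled accordingly each time such a step is taken. *)
let reassoc_right (g : game_shape) : game_shape option =
  match g with
  | Par (Par (a, b), c) -> Some (Par (a, Par (b, c)))
  | _ -> None
\end{lstlisting}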
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}
During this internship, I have established a deterministic game semantics for
a simple concurrent language. Although the language is minimal, it should not
be too hard to lift the model to a language closer to real-world programming
languages, by including the imperative primitives found in the literature.

\medskip
I also explored the possibility of reaching a full-abstraction result. The
full-abstraction property states that two terms are \emph{observationally
equivalent} if and only if their (denotational) semantics are equal, that is,
in this context, for all \llccs{} terms $P, Q$ such that $\Gamma \vdash P,Q :
A$,
\[
    \left[ \forall \calC[-] \text{ a context such that } \vdash \calC[P] :
        \proc,~ \calC[P] \Downarrow \iff \calC[Q] \Downarrow \right]
    \quad\iff\quad
    \left( \seman{P} = \seman{Q} \right)
\]

Yet, for lack of time, I had to abandon this path: it would have required
either modifying \llccs{} or restricting the authorized strategies, because of
the following legal strategy, which cannot be expressed as the semantics of
any \llccs{} term:
\[ \begin{tikzpicture}
    \node (0P) at (2,2) {$\proc$};
    \node (0C) at (0,2) {$\chan$};
    \node (1) at (0,1) [draw=red, ellipse] {call};
    \node (2) at (0,0) [draw=green, ellipse] {done};
    \node (3) at (2,1) [draw=red, ellipse] {call};
    \node (4) at (2,0) [draw=green, ellipse] {done};
    \path[->]
        (1) edge (2)
        (3) edge (4)
            edge (1);
\end{tikzpicture}
\]

This strategy behaves like a ``forget'' strategy: its effect is to call one
end of a channel, and then to resume the execution of the term without waiting
for the other end to be called. If we integrate this operator into the
language as $(f\,a)$ for any channel $a$, this construction can discriminate
terms that could not be discriminated before. For instance, $\lambda x^{\chan
\linarrow \proc} \cdot \newch{a} ((x a) \cdot \bar{a} \cdot 1)$ and $\lambda
x^{\chan \linarrow \proc} \cdot \newch{a} ((x a) \cdot \bar{a} \cdot 0)$ can
be discriminated by the context $\calC[X] = X (\lambda a^\chan \cdot (f a))$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliography{biblio}
\bibliographystyle{alpha}
\end{document}