Now writing up the event structures presentation

This commit is contained in:
Théophile Bastian 2016-08-10 16:57:58 +01:00
parent 15de66b6a5
commit 508ffd0351
2 changed files with 282 additions and 162 deletions


\newcommand{\edgeArrow}{\rightarrowtriangle}
\newcommand{\linarrow}{\rightspoon}
\newcommand{\redarrow}[1]{\overset{#1}{\longrightarrow}}
\newcommand{\config}{\mathscr{C}}
\newcommand{\lcalc}{$\lambda$-calculus}
\newcommand{\lccs}{$\lambda$CCS}
\newcommand{\linccs}{CCS$_\linarrow$}
\newcommand{\llccs}{$\lambda$CCS$_\linarrow$}
\newcommand{\proc}{\mathbb{P}}
\newcommand{\chan}{\mathbb{C}}


\newcommand{\qtodo}[1]{\colorbox{orange}{\textcolor{blue}{#1}}}
\newcommand{\todo}[1]{\colorbox{orange}{\qtodo{\textbf{TODO:} #1}}}
\newcommand{\qnote}[1]{\colorbox{Cerulean}{\textcolor{Sepia}{[#1]}}}
\author{Théophile \textsc{Bastian}, supervised by Glynn \textsc{Winskel}
and Pierre \textsc{Clairambault} \\
\tableofcontents
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\paragraph{Game semantics} are a kind of denotational semantics in which a
in~\cite{rideau2011concurrent} by S. \fname{Rideau} and G. \fname{Winskel}.
Roughly, event structures represent a strategy as a \emph{partial order} on the
moves, stating that move $x$ can only be played after move $y$, which is more
flexible than tree-like game approaches. Although a full-abstraction result
could not be reached --- but is not so far away ---, we have proved the
\emph{adequacy} of the operational and denotational semantics, and have
obtained an implementation of the (denotational) game semantics, that is,
code that translates a term of the language into its corresponding strategy.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A linear \lcalc{} with concurrency primitives: \llccs}
The language on which my internship was focused was meant to be simple,
easy to parse and easy to work with, both in theory and in the
implementation. It should of course include concurrency primitives. For
these reasons, we chose to consider a variant of CCS~\cite{milner1980ccs}
--- a simple, standard language including parallel and sequential execution
primitives, as well as synchronization of processes through \emph{channels}
--- lifted to higher order through a \lcalc. The language was then
restricted to a \emph{linear} one --- that is, each identifier declared must
be referred to exactly once --- partly to keep the model simple, partly to
meet the determinism requirements.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{An affine variant of CCS}
The variant of CCS we chose has two base types:
\emph{processes}~($\proc$) and \emph{channels}~($\chan$). It has two base
processes: $0$ (error) and $1$ (success), although a process can be
considered ``failed'' without reducing to $0$, for instance when it
deadlocks.
Although the \lcalc{} into which we will plug it is linear, it appeared
unavoidable to use an \emph{affine} CCS --- that is, each identifier may be
used \emph{at most} once instead of \emph{exactly} once --- to ease the
inductions over CCS terms.
\begin{figure}[h]
\begin{minipage}[t]{0.60\textwidth}
\begin{center}Terms\end{center}\vspace{-1em}
\begin{align*}
t,u,\ldots ::=~&1 & \text{(success)}\\
\vert~&0 & \text{(error)}\\
\vert~&t \parallel u & \text{(parallel)}\\
\vert~&t \cdot u & \text{(sequential)}\\
\vert~&(\nu a) t & \text{(new channel)}
\end{align*}
\end{minipage} \hfill \begin{minipage}[t]{0.35\textwidth}
\begin{center}Types\end{center}\vspace{-1em}
\begin{align*}
A,B,\ldots ::=~&\proc & \text{(process)} \\
\vert~&\chan & \text{(channel)}
\end{align*}
\end{minipage}
\caption{CCS terms and types}\label{fig:lccs:def}
\end{figure}
The syntax is straightforward: $0$ and $1$ are base processes; $\parallel$
executes its two operands in parallel; $\cdot$ executes its two operands
sequentially (or synchronizes on a channel if its left-hand operand is a
channel); $(\nu a)$ creates a new channel $a$ on which two processes can be
synchronized. Here, ``synchronization'' simply means that a call to the
channel blocks until its other end has been called as well.
The language is simply typed, as shown in figure~\ref{fig:lccs:typing}, in
which $\Phi$ is an environment composed \emph{only} of channels, reflecting
the affine nature of the language. Note that binary operators split their
environment between their two operands, ensuring that each identifier is
used at most once.
\begin{figure}[h]
\begin{align*}
\frac{~}{\Phi \vdash 0:\proc} & (\textit{Ax}_0) &
\frac{~}{\Phi \vdash 1:\proc} & (\textit{Ax}_1) &
\frac{~}{\Phi, t:A \vdash t:A} & (\textit{Ax}) &
\frac{\Gamma, a:\chan, \bar{a}:\chan \vdash P : \proc}
{\Gamma \vdash (\nu a) P : \proc}
& (\nu)
\end{align*}\vspace{-1.5em}\begin{align*}
\frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
{\Gamma,\Delta \vdash P \parallel Q : \proc}
& (\parallel) &
\frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
{\Gamma,\Delta \vdash P \cdot Q : \proc}
& (\cdot_\proc) &
\frac{\Gamma \vdash P : \proc}
{\Gamma,a:\chan \vdash a \cdot P: \proc}
& (\cdot_\chan)
\end{align*} \vspace{-1.5em}
\caption{CCS typing rules}\label{fig:lccs:typing}
\end{figure}
We also equip this language with operational semantics, in the form of a
labeled transition system (LTS), as described in figure~\ref{fig:lccs:opsem},
where $a$ denotes a channel and $x$ denotes any possible label.
\begin{figure}[h]
\begin{align*}
\frac{~}{a \cdot P \redarrow{a} P} & &
\frac{~}{1 \parallel P \redarrow{\tau} P} & &
\frac{~}{1 \cdot P \redarrow{\tau} P} & &
\frac{~}{(\nu a) 1 \redarrow{\tau} 1} & &
\frac{P \redarrow{a} P'\quad Q \redarrow{\bar{a}} Q'}
{P \parallel Q \redarrow{\tau} P' \parallel Q'}
\end{align*}\begin{align*}
\frac{P \redarrow{x} P'}
{P \parallel Q \redarrow{x} P' \parallel Q} & &
\frac{Q \redarrow{x} Q'}
{P \parallel Q \redarrow{x} P \parallel Q'} & &
\frac{P \redarrow{x} P'}
{P \cdot Q \redarrow{x} P' \cdot Q} & &
\frac{P \redarrow{x} P'}
{(\nu a)P \redarrow{x} (\nu a)P'} & (a \neq x)
\end{align*}
\caption{CCS operational semantics}\label{fig:lccs:opsem}
\end{figure}
We consider that a term $P$ \emph{converges} whenever $P \redarrow{\tau}^\ast
1$, and we write $P \Downarrow$.
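The LTS above is small enough to execute directly. The following sketch is
ours, independent of the internship's actual implementation; the tuple
encoding of terms and all names are assumptions. It enumerates the
transitions derivable from the rules of figure~\ref{fig:lccs:opsem} and
decides the convergence predicate $P \Downarrow$ by exploring $\tau$-steps:

```python
ONE, ZERO = ("one",), ("zero",)

def par(p, q): return ("par", p, q)
def seq(p, q): return ("seq", p, q)
def new(a, p): return ("new", a, p)
def call(a, p): return ("call", a, p)          # channel prefix  a . P

def bar(a):
    """Dual channel name: bar("a") == "a'", bar("a'") == "a"."""
    return a[:-1] if a.endswith("'") else a + "'"

def steps(t):
    """All one-step transitions (label, t') derivable from the LTS;
    the silent action is labelled "tau"."""
    kind, out = t[0], []
    if kind == "call":                          # a . P  --a-->  P
        out.append((t[1], t[2]))
    elif kind == "seq":
        p, q = t[1], t[2]
        if p == ONE:                            # 1 . P  --tau-->  P
            out.append(("tau", q))
        out += [(x, seq(p2, q)) for x, p2 in steps(p)]
    elif kind == "new":
        a, p = t[1], t[2]
        if p == ONE:                            # (nu a) 1  --tau-->  1
            out.append(("tau", ONE))
        out += [(x, new(a, p2)) for x, p2 in steps(p)
                if x not in (a, bar(a))]        # side condition a != x
    elif kind == "par":
        p, q = t[1], t[2]
        if p == ONE:                            # 1 || P  --tau-->  P
            out.append(("tau", q))
        sp, sq = steps(p), steps(q)
        out += [(x, par(p2, q)) for x, p2 in sp]
        out += [(x, par(p, q2)) for x, q2 in sq]
        out += [("tau", par(p2, q2))            # synchronization on a, a-bar
                for x, p2 in sp for y, q2 in sq
                if x != "tau" and y == bar(x)]
    return out

def converges(t):
    """P converges iff P  -tau->*  1 (search over tau-reachable terms)."""
    seen, todo = set(), [t]
    while todo:
        u = todo.pop()
        if u == ONE:
            return True
        if u in seen:
            continue
        seen.add(u)
        todo += [v for x, v in steps(u) if x == "tau"]
    return False
```

For instance, $(\nu a)(a\cdot 1 \parallel \bar{a}\cdot 1)$ converges by
synchronizing on $a$, while $(\nu a)(a \cdot 1)$ is stuck and does not.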
\subsection{Lifting to the higher order: linear \lcalc}
In order to reach the studied language, \llccs, we have to lift \lccs{} up
to a \lcalc. To do so, we add to the language the constructions of
figure~\ref{fig:llam:syntax}, which are basically the usual \lcalc{}
constructions slightly transformed to be linear (which is mostly reflected by
the typing rules). In particular, there is no base type $\alpha$: the base
types are only $\proc$ and $\chan$.
\begin{figure}[h]
\begin{minipage}[t]{0.55\textwidth}
\begin{center}Terms\end{center}\vspace{-1em}
\begin{align*}
t,u,\ldots ::=~&x \in \mathbb{V} & \text{(variable)}\\
\vert~&t~u & \text{(application)}\\
\vert~&\lambda x^A \cdot t & \text{(abstraction)}\\
\vert~&t \tens u & \text{(tensor)} \\
\vert~&\text{let }x,y=z\text{~in }t & \text{(tensor elimination)}
\end{align*}
\end{minipage} \hfill \begin{minipage}[t]{0.40\textwidth}
\begin{center}Types\end{center}\vspace{-1em}
\begin{align*}
A,B,\ldots ::=~&A \linarrow B & \text{(linear arrow)}\\
\vert~&A \Tens B & \text{(tensor)}
\end{align*}
\end{minipage}
\caption{Linear \lcalc{} terms and types}\label{fig:llam:syntax}
\end{figure}
To enforce linearity, the only typing rules of the usual \lcalc{} that have
to be changed are those of figure~\ref{fig:llam:typing}. The usual rules for
$\lambda$-abstraction and $\otimes$-elimination apply.
\begin{figure}[h]
\begin{align*}
\frac{~}{x : A \vdash x : A} & (\textit{Ax}) &
\frac{\Gamma \vdash t : A \linarrow B \quad \Delta \vdash u : A}
{\Gamma,\Delta \vdash t~u : B} & (\textit{App}) &
\frac{\Gamma \vdash t : A \quad \Delta \vdash u : B}
{\Gamma, \Delta \vdash t \tens u : A \Tens B} & (\tens_I)
\end{align*}
\caption{Linear \lcalc{} typing rules (eluding the usual
ones)}\label{fig:llam:typing}
\end{figure}
Linearity is thus guaranteed: in the (\textit{Ax}) rule, the environment
must be exactly $x:A$ instead of the usual $\Gamma, x:A$, ensuring that each
variable is used \emph{at least once}; while the environment split in the
binary operators' rules ensures that each variable is used \emph{at most
once} (implicitly, $\Gamma \cap \Delta = \emptyset$).
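Both constraints are easy to check mechanically. The toy checker below is
our own encoding (tensors are omitted for brevity); it computes the
free-variable uses of a term and rejects any term that drops or duplicates
a variable:

```python
def uses(t):
    """Free-variable uses of a term, where a term is ("var", x),
    ("app", t, u) or ("lam", x, t); raises TypeError when the linear
    discipline is violated."""
    kind = t[0]
    if kind == "var":
        return [t[1]]
    if kind == "app":
        left, right = uses(t[1]), uses(t[2])
        # environment split Gamma, Delta: the two sides must be disjoint
        if set(left) & set(right):
            raise TypeError("variable used twice across an application")
        return left + right
    if kind == "lam":
        x, body = t[1], uses(t[2])
        # the (Ax) rule with environment exactly x : A forces x to be
        # used; the environment splits forbid using it twice
        if body.count(x) != 1:
            raise TypeError(f"{x} must be used exactly once")
        return [v for v in body if v != x]
    raise ValueError(f"unknown term constructor {kind!r}")
```

Under this check, $\lambda x^A \cdot x$ is accepted, while both
$\lambda x^A \cdot \lambda y^B \cdot x$ (variable dropped) and
$x~x$ (variable duplicated) are rejected.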
To lift the operational semantics to \llccs, we only need to add one rule:
\[
\frac{P \longrightarrow_\beta P'}{P \redarrow{\tau} P'}
\]
%%%%%%%%%%%%%%%%%%%%%
\subsection{Examples}
\todo{Examples}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A games model}
Our goal is now to give a games model for the above language. For that purpose,
we will use \emph{event structures}, providing an alternative formalism to
the often-used tree-like games.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The event structures framework}
The traditional approach to concurrent games is to represent them as
\emph{tree-like games}. If the considered game consists of three moves, namely
game tree
This can of course be used to describe much larger games, and is often useful
to reason concurrently, since the causal histories appear clearly: the possible
states of the game can be read easily by concatenating the events found along
a path from the root of the tree.
But it also has the major drawback of growing exponentially in size: let us
consider a game in which Opponent must play $A$ and $B$ in no particular order
before Player can play $C$. The corresponding tree-like game would be
\end{tikzpicture}
\]
This problem motivated the introduction of \emph{event structures} as a
formalism to describe such games~\cite{rideau2011concurrent}. Informally, an
event structure is a partial order $\leq$ on \emph{events} (here, the game's
moves), alongside with a \emph{consistency} relation.
The purpose of the consistency relation is to describe non-determinism, in
which we are not interested here, since we seek a deterministic model: in all
the following constructions, I will omit the consistency set. The original
constructions including it can be found for instance
in~\cite{castellan2016concurrent,winskel1986event}.
The partial order $e_1 \leq e_2$ means that $e_1$ must have been played before
$e_2$ can be played. For instance, the Hasse diagram of the previous game would
look like
\[
\begin{tikzpicture}
\node (1) {A};
\end{tikzpicture}
\]
%%%%%
\subsubsection{Event structures}
\begin{definition}[event structure]
An \emph{event structure}~\cite{winskel1986event} is a poset $(E, \leq_E)$,
where $E$ is a set of \emph{events} and $\leq_E$ is a partial order on $E$.
The partial order $\leq_E$ naturally induces a binary relation $\edgeArrow$
over $E$ that is defined as the transitive reduction of $\leq_E$.
@ -212,21 +380,11 @@ events consistent with one another and its Hasse diagram would look like
In this context, the right intuition of event structures is that of a set of
events that can occur (the players' moves), along with a partial order
stating that a given move cannot occur before another move.
Event structures are often represented as a directed acyclic graph (DAG),
where the vertices are the elements of $E$ and the edges are given by the
transitive reduction of $\leq_E$.
\begin{definition}[event structure with polarities]
An \emph{event structure with polarities} (\textit{ESP}) is an event
structure $E$ together with a function $\rho : E \to \set{+,-}$ associating
a \emph{polarity} to each event.
\end{definition}
In order to model games, this is used to represent whether a move is to be
played by Player or Opponent. To represent polarities, we will often use
colors instead of $+$ and $-$ signs: a red-circled event has negative
polarity, \ie{} is played by Opponent, while a green-circled one has
positive polarity.
The ESP of the previous example would then be
\[
\begin{tikzpicture}
\node (1) [draw=red,ellipse] {A};
\node (3) [draw=green,ellipse,right of=1] {C};
\node (2) [draw=red,ellipse,right of=3] {B};
\path[->]
(1) edge (3)
(2) edge (3);
\end{tikzpicture}
\]
\begin{definition}[configuration]
A \emph{configuration} of an ESP $A$ is a finite subset $X \subseteq A$
that is \emph{down-closed}, \ie{}
\vspace{-0.5em}
\[ {\forall x \in X}, {\forall e \in A}, {e \leq_A x} \implies {e \in X}.\]
$\config(A)$ is the set of configurations of $A$.
\end{definition}
A configuration can thus be seen as a valid state of the game. $\config(A)$
plays a major role in definitions and proofs on games and strategies.
\begin{notation}
For $x,y \in \config(A)$, $x \forkover{e} y$ states that $y = x \sqcup
\set{e}$ (and that both are valid configurations). It is also possible to
write $x \forkover{e}$, stating that $x \sqcup \set{e} \in \config(A)$, or
$x \fork y$.
\end{notation}
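For finite event structures these notions are directly computable. The
sketch below uses our own encoding (an ESP's underlying poset given as a set
of events plus its immediate-causality edges) to enumerate $\config(A)$ and
the extensions $x \forkover{e} y$; it is a toy enumeration, exponential by
nature:

```python
from itertools import combinations

def configurations(events, edges):
    """All configurations (down-closed subsets) of a finite event
    structure, given its events and immediate-causality edges (x, y)."""
    def strictly_below(e, acc):
        # accumulate every event strictly below e in the partial order
        for (x, y) in edges:
            if y == e and x not in acc:
                acc.add(x)
                strictly_below(x, acc)
        return acc
    below = {e: strictly_below(e, set()) for e in events}
    return [frozenset(c)
            for k in range(len(events) + 1)
            for c in combinations(sorted(events), k)
            if all(below[e] <= set(c) for e in c)]   # down-closure

def forks(x, configs):
    """The extensions  x --e--> y : configurations y = x + {e}."""
    return [(min(y - x), y) for y in configs
            if x < y and len(y - x) == 1]
```

On the example above (events $A$, $B$, $C$ with $A \leq C$ and $B \leq C$),
this yields the five configurations $\emptyset$, $\{A\}$, $\{B\}$,
$\{A,B\}$ and $\{A,B,C\}$.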
%%%%%
\subsubsection{Concurrent games}
\begin{definition}[game]
A \emph{game} $A$ is an event structure with polarities. \\
machine: Player is the coffee machine, while Opponent is a user coming to buy a
drink.
\begin{example}[Coffee machine]
Here, the game has only events, but no edges: nothing in the rules of the
game constrains the program to behave in a certain way, only its
\emph{strategy} will do that.
\includedot[scale=0.9]{coffeemachine.game}
\captionof{figure}{Coffee machine game}
\medskip
The user can insert a coin, ask for a coffee or ask for a tea. The coffee
machine can deliver a coffee or deliver a tea.
\end{example}
\begin{definition}[pre-strategy]
A \emph{pre-strategy} on the game $A$, $\sigma: A$, is an ESP such that
\begin{enumerate}[(i)]
\item $\sigma \subseteq A$;
\item $\config(\sigma) \subseteq \config(A)$;
\item $\forall s \in \sigma, \rho_A(s) = \rho_\sigma(s)$.
\end{enumerate}
\end{definition}
\begin{example}[Coffee machine, cont.]
A possible \emph{pre-strategy} for our coffee machine example would be
\smallskip
user to both put a coin and press ``coffee'' before delivering a coffee,
and same goes for tea. Though, what happens if the user inserts a coin and
presses \emph{both} buttons at the same time? Here, the coffee machine can
dispense both drinks. This behavior is surely unwanted, but cannot be
easily fixed using this representation of the coffee machine --- this would
have been the role of the \emph{consistency set} we briefly mentioned.
\end{example}
But as it is defined, a pre-strategy does not exactly capture what we expect of
a \emph{strategy}: it is too expressive. For instance, a pre-strategy on the
coffee machine game would be allowed to have the (absurd) relation
$\text{coffee} \leq \text{getTea}$, stating that the user cannot press the
``tea'' button before the machine has delivered a coffee. This is
unrealistic: the coffee machine has no means to enforce this. We thus have
to restrict pre-strategies to \emph{strategies}:
\begin{definition}[strategy]
A \emph{strategy} is a pre-strategy $\sigma : A$ that
``behaves well'', \ie{} that is
\begin{enumerate}[(i)]
\item\label{def:receptive}
\textit{receptive}: for all $x \in \config(A)$ \st{}
$x \forkover{e^-}$, $e \in \sigma$;
\item\label{def:courteous}
\textit{courteous}: $\forall x \edgeArrow_\sigma x' \in \sigma$,
$(\rho(x),\rho(x')) \neq (-,+) \implies
x \edgeArrow_A x'$.
\end{enumerate}
\end{definition}
(\ref{def:receptive}) captures the idea that we should not force Opponent not
to play one of its moves. Indeed, not including an event in a strategy means
that this event \emph{will not} be played. It is unreasonable to consider that
a strategy could forbid Opponent to play a given move.
(\ref{def:courteous}) states that unless a dependency relation is imposed by
the game's rules, one can only make one of one's moves depend on an Opponent
move, \ie{} every direct arrow in the partial order that is not inherited
from the game should be ${\ominus \edgeArrow \oplus}$. Clearly, it is
unreasonable to consider an arrow ${\ostar \edgeArrow \ominus}$, which would
mean forcing Opponent to wait for a move (either from Player or Opponent)
before playing her move; but ${\oplus \edgeArrow \oplus}$ is also
unreasonable, since we are working in a concurrent context: when playing $x$
then $y$, it is undefined whether Opponent will receive $x$ then $y$ or $y$
then $x$.
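Both conditions become simple first-order checks once a game and a strategy
are given concretely. In the deterministic reading used here (a strategy is
a subset of the game's events together with extra causal edges; the encoding
and all names are ours), receptivity says that every Opponent move is
present, and courtesy inspects each causal edge added on top of the game:

```python
def receptive(game_events, polarity, strat_events):
    """Deterministic reading of receptivity: no Opponent move of the
    game is missing from the strategy."""
    return all(e in strat_events
               for e in game_events if polarity[e] == "-")

def courteous(game_edges, strat_edges, polarity):
    """Every immediate causal edge the strategy adds on top of the game
    must go from an Opponent move to a Player move (- before +)."""
    return all((polarity[x], polarity[y]) == ("-", "+")
               for (x, y) in strat_edges if (x, y) not in game_edges)
```

On the coffee machine (whose game has no edges at all), the strategy edges
$\text{coin} \edgeArrow \text{giveCoffee}$ and the like are all of shape
$({-},{+})$ and thus courteous, whereas an added edge
$\text{giveCoffee} \edgeArrow \text{askTea}$ would be rejected.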
%%%%%
\subsubsection{Operations on games and strategies}
\todo{intro}
$\sigma: S \to A$ and $\tau: T \to B$ denote strategies.
polarities) and to strategies.
\end{definition}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Interpretation of \llccs}
%%%%%%%%%%%%%%%%%%%%%
\subsection{Adequacy}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Implementation of deterministic concurrent games}
\hfill\href{https://github.com/tobast/cam-strategies/}
$\edgeArrow$ relation (that is, the transitive reduction of $\leq$), which means
computing the transitive closure of the interaction, hiding the central part
and then computing the transitive reduction of the DAG\@.
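The two graph computations involved are standard. A naive sketch over edge
sets (quadratic, assuming an acyclic relation, and independent of the actual
implementation) is:

```python
def transitive_closure(edges):
    """Naive closure of a relation given as a set of (x, y) pairs:
    keep adding (x, z) whenever (x, y) and (y, z) are present."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y2 == y and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

def transitive_reduction(edges):
    """Immediate-causality edges of an acyclic relation: drop (x, z)
    whenever some intermediate y gives x < y and y < z."""
    closure = transitive_closure(edges)
    nodes = {v for e in closure for v in e}
    return {(x, z) for (x, z) in closure
            if not any((x, y) in closure and (y, z) in closure
                       for y in nodes)}
```

Hiding then amounts to restricting the closed relation to the visible
events before reducing it.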
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear lambda-calculus}
Concurrent games can be used as a model of lambda-calculus. To keep the
each variable in the environment can and must be used exactly once.
The linear lambda calculus we use has the same syntax as the usual
simply-typed lambda calculus with type annotations and a tensor product, as
already presented in figure~\ref{fig:llam:syntax}.
environment is $\Gamma,\Delta$, while doing so in the actual implementation
turns out to be a nightmare: it is better to keep the environment ordered by
the variables introduction order, thus intertwining $\Gamma$ and $\Delta$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[h]
\begin{align*}
\seman{P \parallel Q} &\eqdef \left(