\documentclass[11pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{indentfirst}
\usepackage{enumerate}
\usepackage{cite}
\usepackage{caption}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
% Custom packages
\usepackage{leftrule_theorems}
\usepackage{my_listings}
\usepackage{my_hyperref}
\usepackage{math}
\usepackage{concurgames}
\newcommand{\qtodo}[1]{\colorbox{orange}{\textcolor{blue}{#1}}}
\newcommand{\todo}[1]{\colorbox{orange}{\qtodo{\textbf{TODO:} #1}}}
\author{Théophile \textsc{Bastian}, supervised by Glynn \textsc{Winskel}
and Pierre \textsc{Clairambault} \\
\begin{small}Cambridge University\end{small}}
\title{Internship report\\Concurrent games as event structures}
\date{June-July 2016}
\begin{document}
\maketitle

\todo{abstract}

\tableofcontents
\section{Existing work}

My work is set in the context of a wider theory, the basics of which are
necessary to understand what follows; the purpose of this section is to
present them.

The general work of the team I was working in could be described as
``concurrent games as event structures'', that is, using the \textit{event
structures} formalism to describe concurrent games, instead of the more
traditional approach of \emph{tree-like games} (``Player plays $A$, then
Opponent plays $B$, thus reaching the configuration $A \cdot B$'').
\subsection{Informal approach}
\subsection{Event structures}

\begin{definition}[event structure]
An \emph{event structure}~\cite{winskel1986event} is a triple
$(E, \leq_E, \con_E)$, where $E$ is a set of \emph{events}, $\leq_E$ is a
partial order on $E$ and $\con_E \subseteq \powerset(E)$ is the set of
\emph{consistent sets} of events.
The partial order $\leq_E$ naturally induces a binary relation $\edgeArrow$
over $E$, defined as the transitive reduction of $\leq_E$.
\end{definition}
In this context, the right intuition of an event structure is a set of events
that can occur --- the players' moves --- along with a partial order stating
that a given move cannot occur before another move, and a consistency
relation indicating whether a given set of moves can occur in the same
instance of the game.
The consistency relation is often replaced by a weaker binary \emph{conflict}
relation $\confl$, indicating that two events cannot occur together.
During this internship, my work was essentially carried out on event
structures without conflicts. The consistency set is thus not relevant and
will be omitted in what follows, but one can refer
to~\cite{castellan2016concurrent} for the corresponding constructions with
consistency sets.
Event structures are often represented as directed acyclic graphs (DAGs),
where the vertices are the elements of $E$ and the edges are given by the
transitive reduction of $\leq_E$, onto which the conflict relation is
superimposed.

\begin{definition}[event structure with polarities]
An \emph{event structure with polarities} (\textit{ESP}) is an event
structure $(E, \leq_E, \rho)$, where $\rho : E \to \set{+,-}$ is a
function associating a \emph{polarity} to each event.
\end{definition}
In game semantics, polarities are used to represent whether a move is to be
played by Player or by Opponent.
\begin{definition}[configuration]
A \emph{configuration} of an ESP $A$ is a set of events $X \subseteq E_A$
that is \emph{down-closed}, \ie{} $\forall x \in X, \forall e \in E_A,
e \leq_A x \implies e \in X$.
$\config(A)$ is the set of configurations of $A$.
\end{definition}

A configuration can thus be seen as a valid state of the game. $\config(A)$
plays a major role in definitions and proofs on games and strategies.
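For instance, the conflict-free event structure with events $\set{a,b,c}$,
where $a \leq b$ and $c$ is incomparable to both, has configurations
$\emptyset$, $\set{a}$, $\set{c}$, $\set{a,b}$, $\set{a,c}$ and
$\set{a,b,c}$; on the other hand, $\set{b}$ is not a configuration, as it
contains $b$ but not its dependency $a$, and is thus not down-closed.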
\begin{notation}
For $x,y \in \config(A)$, $x \forkover{e} y$ states that $y = x \cup
\set{e}$ (and that both are valid configurations). One can also write $x
\forkover{e}$, stating that $x \cup \set{e} \in \config(A)$, or $x \fork y$,
stating that $y = x \cup \set{e}$ for some event $e$.
\end{notation}
\subsection{Concurrent games}
\begin{definition}[game]
A \emph{game} $A$ is an event structure with polarities. \\
The dual game $A^\perp$ is the game $A$ where all the polarities in
$\rho$ have been reversed.
\end{definition}
For instance, one could imagine a game modeling the user interface of a coffee
machine: Player is the coffee machine, while Opponent is a user coming to buy a
drink.

\begin{example}[Coffee machine]
In this example (and all the following ones), a red-circled node has a
negative polarity, while a green-circled one has a positive polarity.

Here, the game has only events, but no edges: nothing in the rules of the
game constrains the coffee machine to behave in a certain way, only its
\emph{strategy} will do that.

\smallskip
\includedot[scale=0.9]{coffeemachine.game}
\captionof{figure}{Coffee machine game}

The user can insert a coin, ask for a coffee or ask for a tea. The coffee
machine can deliver a coffee or deliver a tea.
\end{example}
\begin{definition}[pre-strategy]
A \emph{pre-strategy} $\sigma: S \to A$ is a total map of ESPs, where
$A$ is the game on which the strategy plays, such that
\begin{enumerate}[(i)]
\item $\forall x \in \config(S), \sigma(x) \in \config(A)$;
\item \textit{(local injectivity)} $\forall x \in \config(S),
\forall s,s' \in x, \sigma(s) = \sigma(s') \implies s = s'$;
\item $\forall s \in S, \rho_A(\sigma(s)) = \rho_S(s)$.
\end{enumerate}
\end{definition}
\begin{example}[Coffee machine, cont.]
Let us now define a possible \emph{pre-strategy} for our coffee machine
example.

\smallskip
\begin{centering}
\includedot{coffeemachine.strat}
\captionof{figure}{Coffee machine strategy}
\end{centering}

This pre-strategy makes sense: the coffee machine software waits for the
user to both put a coin and press ``coffee'' before delivering a coffee,
and the same goes for tea. But what happens if the user inserts a coin and
presses \emph{both} buttons at the same time? Here, the coffee machine can
dispense both drinks. This behavior is surely unwanted: one should add a
conflict relation between coffee and tea, to ensure that only one of the
two drinks can be dispensed.
\end{example}
\begin{definition}[strategy]
A \emph{strategy} is a pre-strategy $\sigma : S \to A$ that
``behaves well'', \ie{} that is
\begin{enumerate}[(i)]
\item\label{def:receptive}
\textit{receptive}: for all $x \in \config(S)$ \st{}
$\sigma(x) \forkover{a^-}$, $\exists! s \in S : x \forkover{s} \wedge
\sigma(s) = a$;
\item\label{def:courteous}
\textit{courteous}: $\forall x \edgeArrow x' \in S,
(\rho(x),\rho(x')) \neq (-,+) \implies
\sigma(x) \edgeArrow \sigma(x')$.
\end{enumerate}
\end{definition}

(\ref{def:receptive}) captures the idea that a strategy cannot prevent
Opponent from playing one of its moves, while~(\ref{def:courteous}) states
that unless a dependency is already imposed by the game's rules, a strategy
may only make one of its own moves depend on an Opponent move.
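For instance, the coffee machine pre-strategy above is courteous: every
dependency it adds to the game makes a positive move of the machine
(delivering a drink) wait for negative moves of the user (inserting a coin,
pressing a button), \ie{} every added immediate dependency is of shape
$(-,+)$.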
\subsection{Operations on games and strategies}

The following operations on games and strategies are the basic blocks from
which everything else is built; they are also the operations provided by the
implementation described in the next section.

In this whole section, $E$ and $F$ denote ESPs, $A$ and $B$ denote games,
and $\sigma: S \to A$ and $\tau: T \to B$ denote strategies.
\begin{definition}[Parallel composition]
The \emph{parallel composition} $E \parallel F$ of two ESPs is an ESP
whose events are $\left(\set{0} \times E\right) \cup \left(\set{1} \times
F\right)$ (the disjoint tagged union of the events of $E$ and $F$), and
whose partial order is $\leq_E$ on the copy of $E$ and $\leq_F$ on the copy
of $F$, with no relation between elements of $E$ and $F$.

One can then naturally extend this definition to games (by preserving
polarities) and to strategies.
\end{definition}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Implementation of deterministic concurrent games}

\hfill\href{https://github.com/tobast/cam-strategies/}
{\includegraphics[height=2em]{github32.png}~\raisebox{0.5em}{Github
repository}}

\vspace{1em}

The first part of my internship mostly consisted --- apart from
understanding the bibliography and the underlying concepts --- in
implementing operations on \emph{deterministic} concurrent games, that is,
concurrent games as event structures without conflicts. The work had to be
done from scratch, as no one had implemented this before.
This implementation aims to provide
\begin{enumerate}[(i)]
\item a --- more or less --- convenient way to input games/strategies;
\item basic operations over those games and strategies: parallel
composition, pullback, interaction, composition, copycat, \ldots;
\item a clean display as a Dot graph.
\end{enumerate}
\subsection{Structures}

The implementation aims to stay as close as possible to the mathematical
model, while still providing reasonably efficient operations.

As we do not handle non-determinism, an event structure can easily be
represented as a DAG in memory. The representation chosen is a set of nodes,
each containing (along with a few other pieces of information) a list of
incoming and outgoing edges.
A \emph{game} is, in the literature, a simple ESP\@. However, to provide
interaction and composition operations, we have to somehow keep track of the
parallel compositions that were used to reach this game: if the user wants to
compose strategies on $A \strParallel B$ and $B \strParallel C$, we have to
remember that those games were indeed parallel compositions of the right
games, and not just a set where the events from, \eg, $A$ and $B$ are mixed.

This information is kept in a tree, whose leaves are the base games that were
put in parallel, and whose nodes represent a parallel composition operation.

Finally, a \emph{strategy} consists of a game and an ESP (the strategy
itself), plus a map from the nodes of the strategy to the nodes of the game.
This structure is really close to the mathematical definition of a strategy,
and yet costs only a little in efficiency.
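To fix ideas, these structures could be sketched in OCaml as follows; all
the type and field names below are hypothetical simplifications, not the
ones of the actual code.
\begin{lstlisting}
(* Rough sketch of the structures; names are hypothetical
   simplifications, not the ones of the actual code. *)
type polarity = Pos | Neg

type node = {
  id : int;                     (* unique identifier of the event *)
  pol : polarity;
  mutable incoming : node list; (* immediate causal dependencies *)
  mutable outgoing : node list; (* events depending on this one *)
}

(* An event structure is its set of nodes; the DAG lives in the nodes. *)
type esp = node list

(* A game remembers the parallel compositions it was built from. *)
type game =
  | Base of esp
  | Parallel of game * game

(* A strategy: its own ESP, its game, a map into the game's events. *)
type strategy = {
  strat : esp;
  game  : game;
  map   : (int, node) Hashtbl.t; (* strategy event id -> game event *)
}
\end{lstlisting}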
\subsection{Operations}

The usual operations on games and strategies, namely \emph{parallel
composition}, \emph{pullback}, \emph{interaction} and \emph{composition},
are implemented in a very modular way: each operation is implemented in a
functor, whose arguments are the other operations it makes use of, each
coming with its signature. Thus, one can simply \lstocaml{open
Operations.Canonical} to use the canonical implementation, or define one's
own implementation, build it into an \lstocaml{Operations} module (which
takes only a few lines of code) and open it. This is totally transparent to
the user, who can use the same infix operators either way.
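Schematically, the design looks as follows, with made-up module types and
operator names (the actual ones are those of the repository):
\begin{lstlisting}
(* Made-up module types illustrating the functor-based design. *)
module type PARALLEL = sig
  val parallel : strategy -> strategy -> strategy
end
module type PULLBACK = sig
  val pullback : strategy -> strategy -> strategy
end

(* An Operations module merely wires implementations together and
   exposes the infix operators. *)
module MakeOperations (P : PARALLEL) (B : PULLBACK) = struct
  let ( ||| ) = P.parallel
  let ( &&& ) = B.pullback
end
\end{lstlisting}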
\subsubsection{Parallel composition}

While the usual construction~\cite{castellan2016concurrent} involves
defining the events of $A \strParallel B$ as ${\set{0} \times A} \cup
{\set{1} \times B}$, the parallel composition of two strategies is here
simply represented as the union of both event structures, while altering the
composition tree of the underlying game.
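With the hypothetical types sketched above, this amounts to something like:
\begin{lstlisting}
(* Parallel composition sketch: union of the event sets, plus a new
   node in the composition tree (hypothetical representation). *)
let parallel (s : strategy) (t : strategy) : strategy = {
  strat = s.strat @ t.strat;         (* events are disjoint by construction *)
  game  = Parallel (s.game, t.game); (* remember the composition *)
  map   = (let m = Hashtbl.copy s.map in
           Hashtbl.iter (Hashtbl.add m) t.map; m);
}
\end{lstlisting}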
\subsubsection{Pullback}

Given two strategies on the same game, the pullback operation attempts to
extract a ``common part'' of those two strategies. Intuitively, the pullback
of two strategies is ``what happens'' if those two strategies play together.

The approach that was implemented (and that is used as
\lstocaml{Pullback.Canonical}) is a \emph{bottom-up} approach: iteratively,
the algorithm looks for an event that has no pending dependencies in either
strategy, adds it and removes the dependencies it satisfies.

One could also imagine a \emph{top-down} approach, where the algorithm
starts from the merged events of both strategies, then looks for causal
loops and removes every event involved in one.
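A minimal sketch of the bottom-up computation, assuming --- as a
simplification --- that both strategies share the same events, here
represented as integers mapped to their immediate dependencies:
\begin{lstlisting}
(* Bottom-up pullback sketch. Events are integers; [deps1] and [deps2]
   map each event to its immediate dependencies in either strategy. *)
let pullback deps1 deps2 =
  (* in the pullback, an event inherits its dependencies from both *)
  let deps e =
    (try List.assoc e deps1 with Not_found -> [])
    @ (try List.assoc e deps2 with Not_found -> []) in
  let rec loop placed remaining =
    let ready, rest = List.partition
      (fun e -> List.for_all (fun d -> List.mem d placed) (deps e))
      remaining in
    if ready = [] then List.rev placed (* done, or causal loop: drop rest *)
    else loop (List.rev_append ready placed) rest
  in
  (* events of the pullback, in an order compatible with its causality *)
  loop [] (List.map fst deps1)
\end{lstlisting}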
\subsubsection{Interaction}

Once the previous operations are implemented, \emph{interaction} is easily
defined as in the literature~\cite{castellan2016concurrent}, and is nearly a
one-liner.
\subsubsection{Composition}

Composition is also quite easy to implement, given the previous operations.
The only difficulty is computing the new $\edgeArrow$ relation (that is, the
transitive reduction of $\leq$) once the central part is hidden: one has to
compute the transitive closure of the interaction, hide the central part,
and then compute the transitive reduction of the resulting DAG\@.
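This step could be sketched as follows, on a hypothetical boolean
adjacency-matrix representation of the causal order:
\begin{lstlisting}
(* Hiding sketch on a boolean adjacency matrix [g], where g.(i).(j)
   means "event i immediately causes event j". *)
let transitive_closure g =
  let n = Array.length g in
  let c = Array.map Array.copy g in
  for k = 0 to n - 1 do
    for i = 0 to n - 1 do
      for j = 0 to n - 1 do
        if c.(i).(k) && c.(k).(j) then c.(i).(j) <- true
      done
    done
  done;
  c

(* Restrict to visible events, then drop edges implied by transitivity:
   i -> j is immediate iff no visible k lies strictly in between. *)
let hide_and_reduce g visible =
  let c = transitive_closure g in
  let n = Array.length c in
  let exists p = let rec go k = k < n && (p k || go (k + 1)) in go 0 in
  Array.init n (fun i -> Array.init n (fun j ->
    visible.(i) && visible.(j) && c.(i).(j)
    && not (exists (fun k -> visible.(k) && k <> i && k <> j
                             && c.(i).(k) && c.(k).(j)))))
\end{lstlisting}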
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear lambda-calculus}

Concurrent games can be used as a model of lambda-calculus. To avoid
non-determinism in the strategies, and to have a somewhat easier approach,
one can use concurrent games as a model of \emph{linear} lambda-calculus,
that is, a variant of the simply-typed lambda-calculus where each variable
in the environment can and must be used exactly once.
\subsection{Definition}

The linear lambda calculus we use has the same syntax as the usual
simply-typed lambda calculus, with type annotations and a tensor product:
\begin{minipage}[t]{0.60\textwidth}
\begin{center}Terms\end{center}\vspace{-1em}
\begin{align*}
t,u,\ldots :\eqdef~&x \in \mathbb{V} & \text{(variable)}\\
\vert~&t~u & \text{(application)}\\
\vert~&\lambda x^A \cdot t & \text{(abstraction)}\\
\vert~&t \tens u & \text{(tensor)} \\
\vert~&\text{let }x,y=z\text{~in }t & \text{(tensor elimination)}
\end{align*}
\end{minipage} \hfill \begin{minipage}[t]{0.35\textwidth}
\begin{center}Types\end{center}\vspace{-1em}
\begin{align*}
A,B,\ldots :\eqdef~&\alpha & \text{(base type)} \\
\vert~&A \linarrow B & \text{(linear arrow)}\\
\vert~&A \Tens B & \text{(tensor)}
\end{align*}
\end{minipage}
\medskip
Only the following typing rules differ from the usual rules and are worth
noting:
\begin{minipage}{0.4\textwidth} \begin{equation}
\tag{\textit{Ax}}
\frac{~}{x : A \vdash x : A}
\label{typ:llam:ax}
\end{equation} \end{minipage} \hfill
\begin{minipage}{0.5\textwidth} \begin{align}
\tag{\textit{App}}
\frac{\Gamma \vdash t : A \linarrow B \quad
\Delta \vdash u : A}
{\Gamma,\Delta \vdash t~u : B}
\label{typ:llam:app}
\end{align} \end{minipage}
\medskip
Note that in~(\ref{typ:llam:ax}), the context is $x : A$ and not (as usual)
$\Gamma, x:A$. This ensures that every variable present in the environment
is used at least once. The implicit condition $\Gamma \cap \Delta =
\emptyset$ in~(\ref{typ:llam:app}) ensures that every variable is used at
most once.
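For instance, $\vdash \lambda x^\alpha \cdot x : \alpha \linarrow \alpha$ is
derivable, while neither $\lambda x^\alpha \cdot (x \tens x)$ (which would
require splitting the environment $x : \alpha$ into two disjoint parts, each
providing $x$) nor $\lambda x^\alpha \cdot \lambda y^\beta \cdot x$ (which
leaves $y$ unused) is typable.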
The terms can then be interpreted as strategies through the $\seman{\cdot}$
operator defined in figure~\ref{fig:llam:interp}. The $\ominus$ stands for a
game whose only event is negative. The interpretation operator maps a type
to a game, and a term to a strategy playing on the game associated with its
type, put in parallel with its environment's dual. For instance, if $x:A
\vdash t:B$, the strategy $\seman{t}$ plays on $\seman{A}^\perp \parallel
\seman{B}$.

This explains the definition of $\seman{\lambda x^A \cdot t}$: $\seman{t}$
already plays on $\seman{A}^\perp \parallel \seman{B}$, which is exactly
where $\seman{\lambda x^A \cdot t}$ is expected to play.
\begin{figure}
\begin{minipage}[t]{0.45\textwidth} \begin{align*}
\seman{x^A} &\eqdef \cc_{\seman{A}} \\
\seman{t^{A \linarrow B}~u^{A}} &\eqdef
\cc_{A \linarrow B} \strComp \left( \seman{t} \parallel \seman{u}
\right) \\
\seman{\lambda x^A \cdot t} &\eqdef \seman{t} \\
\seman{t \tens u} &\eqdef \seman{t} \parallel \seman{u}
\end{align*} \end{minipage} \hfill
\begin{minipage}[t]{0.45\textwidth} \begin{align*}
\seman{\alpha} &\eqdef \ominus \\
\seman{A \linarrow B} &\eqdef \seman{A}^\perp \parallel \seman{B} \\
\seman{A \Tens B} &\eqdef \seman{A} \parallel \seman{B}
\end{align*}\end{minipage}
\caption{Interpretation of linear lambda calculus}\label{fig:llam:interp}
\end{figure}
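As a sanity check, one can unfold the interpretation on the identity: for
$\vdash \lambda x^\alpha \cdot x : \alpha \linarrow \alpha$,
figure~\ref{fig:llam:interp} gives $\seman{\lambda x^\alpha \cdot x} =
\seman{x^\alpha} = \cc_{\seman{\alpha}}$, a copycat strategy playing on
$\seman{\alpha}^\perp \parallel \seman{\alpha}$, whose positive move (on the
left-hand side) waits for Opponent's negative move (on the right-hand side).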
\subsection{Implementation}

The implementation, which was supposed to be fairly simple, turned out not
to be as straightforward as expected, due to technical details: while, in
the theory, the parallel composition is obviously associative and
commutative (up to isomorphism), and is used as such when dealing with
environments and typing rules, things get a bit harder in practice, when one
is supposed to provide the exact strategy.

For instance, the rule~(\ref{typ:llam:app}) above states that the resulting
environment is $\Gamma,\Delta$, but building it that way in the actual
implementation (that is, simply considering $\seman{\Gamma} \strParallel
\seman{\Delta}$) turns out to be a nightmare: it is better to keep the
environment ordered by the variables' introduction order, thus intertwining
$\Gamma$ and $\Delta$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear CCS}

After working on the linear lambda calculus, my work shifted to ``linear
CCS'', that is, CCS (Calculus of Communicating Systems,~\cite{milner1980ccs})
integrated into the linear lambda calculus described above.

CCS is used as a basic model of multi-threaded systems: its atoms are
\emph{processes}, which can be composed in parallel or in sequence, and
synchronized over channels.
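For instance, with the syntax and typing of figures~\ref{fig:lccs:def}
and~\ref{fig:lccs:typing} below, the term $(\nu a) \left(a \cdot 1 \parallel
\bar{a} \cdot 1\right)$ creates a channel $a$, then runs in parallel two
processes that synchronize on $a$ (one on each end of the channel) before
terminating successfully.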
\subsection{Definition}
\begin{figure}[h]
\begin{minipage}[t]{0.60\textwidth}
\begin{center}Terms\end{center}\vspace{-1em}
\begin{align*}
t,u,\ldots :\eqdef~&1 & \text{(success)}\\
\vert~&0 & \text{(error)}\\
\vert~&t \parallel u & \text{(parallel)}\\
\vert~&t \cdot u & \text{(sequential)}\\
\vert~&(\nu a) t & \text{(new channel)}
\end{align*}
\end{minipage} \hfill \begin{minipage}[t]{0.35\textwidth}
\begin{center}Types\end{center}\vspace{-1em}
\begin{align*}
A,B,\ldots :\eqdef~&\proc & \text{(process)} \\
\vert~&\chan & \text{(channel)}
\end{align*}
\end{minipage}
\caption{CCS terms and types}\label{fig:lccs:def}
\end{figure}
\begin{figure}[h]
\begin{align*}
\frac{~}{\Phi \vdash 0:\proc} & (\textit{Ax}_0) &
\frac{~}{\Phi \vdash 1:\proc} & (\textit{Ax}_1) &
\frac{~}{\Phi, t:A \vdash t:A} & (\textit{Ax}) &
\frac{\Gamma, a:\chan, \bar{a}:\chan \vdash P : \proc}
{\Gamma \vdash (\nu a) P : \proc}
& (\nu)
\end{align*}\vspace{-1.5em}\begin{align*}
\frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
{\Gamma,\Delta \vdash P \parallel Q : \proc}
& (\parallel) &
\frac{\Gamma \vdash P : \proc \quad \Delta \vdash Q : \proc}
{\Gamma,\Delta \vdash P \cdot Q : \proc}
& (\cdot_\proc) &
\frac{\Gamma \vdash P : \proc}
{\Gamma,a:\chan \vdash a \cdot P: \proc}
& (\cdot_\chan)
\end{align*} \vspace{-1.5em}
\caption{CCS typing rules}\label{fig:lccs:typing}
\end{figure}
\begin{figure}[h]
\begin{align*}
\seman{P \parallel Q} &\eqdef \left(
\begin{tikzpicture}[baseline, scale=0.8]
\node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
\node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
\node (2) at (2.5,0.65) [draw=green,ellipse] {call $Q$};
\node (3) at (2.5,-0.65) [draw=red,ellipse] {done $Q$};
\node (0) at (5,0.65) [draw=red,ellipse] {call};
\node (1) at (5,-0.65) [draw=green,ellipse] {done};
\path[->]
(0) edge (1)
edge [bend right] (2)
edge [bend right] (4)
(2) edge (3)
(4) edge (5)
(3) edge [bend right] (1)
(5) edge [bend right] (1);
\end{tikzpicture}
\right) \strComp \left(\seman{P} \parallel \seman{Q}\right) &
\seman{\proc} = \seman{\chan} &\eqdef \begin{tikzpicture}[baseline]
\node (1) at (0,0.5) [draw=red,ellipse] {call};
\node (2) at (0,-0.5) [draw=green,ellipse] {done};
\draw [->] (1) -- (2);
\end{tikzpicture}
\\ %%%%%%%%%%%%%%%%%%%%%%%%%
\seman{P \cdot Q} &\eqdef \left(
\begin{tikzpicture}[baseline,scale=0.8]
\node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
\node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
\node (2) at (2.5,0.65) [draw=green,ellipse] {call $Q$};
\node (3) at (2.5,-0.65) [draw=red,ellipse] {done $Q$};
\node (0) at (5,0.65) [draw=red,ellipse] {call};
\node (1) at (5,-0.65) [draw=green,ellipse] {done};
\path[->]
(0) edge (1)
edge [bend right] (4)
(2) edge (3)
(4) edge (5)
(3) edge [bend right] (1)
(5) edge (2);
\end{tikzpicture}
\right) \strComp \left(\seman{P} \parallel \seman{Q}\right) &
\seman{1} &\eqdef \begin{tikzpicture}[baseline]
\node (1) at (0,0.5) [draw=red,ellipse] {call};
\node (2) at (0,-0.5) [draw=green,ellipse] {done};
\draw [->] (1) -- (2);
\end{tikzpicture}
\\ %%%%%%%%%%%%%%%%%%%%%%%%%
\seman{(a : \chan) \cdot P} &\eqdef \left(
\begin{tikzpicture}[baseline,scale=0.8]
\node (4) at (0,0.65) [draw=green,ellipse] {call $P$};
\node (5) at (0,-0.65) [draw=red,ellipse] {done $P$};
\node (2) at (2.5,0.65) [draw=green,ellipse] {call $a$};
\node (3) at (2.5,-0.65) [draw=red,ellipse] {done $a$};
\node (0) at (5,0.65) [draw=red,ellipse] {call};
\node (1) at (5,-0.65) [draw=green,ellipse] {done};
\path[->]
(0) edge (1)
edge [bend right] (2)
(2) edge (3)
(4) edge (5)
(3) edge (4)
(5) edge [bend right] (1);
\end{tikzpicture}
\right) \strComp \left(\seman{P} \parallel \seman{a}\right) &
\seman{0} &\eqdef \begin{tikzpicture}[baseline]
\node (1) at (0,0.2) [draw=red,ellipse] {call};
\end{tikzpicture}
\\ %%%%%%%%%%%%%%%%%%%%%%%%%
\seman{(\nu a) P} &\eqdef \left(
\begin{tikzpicture}[baseline,scale=0.8]
\node (6) at (0,0.65) [draw=green,ellipse] {call $a$};
\node (7) at (0,-0.65) [draw=red,ellipse] {done $a$};
\node (4) at (2.5,0.65) [draw=green,ellipse] {call $\bar{a}$};
\node (5) at (2.5,-0.65) [draw=red,ellipse] {done $\bar{a}$};
\node (2) at (5,0.65) [draw=green,ellipse] {call $P$};
\node (3) at (5,-0.65) [draw=red,ellipse] {done $P$};
\node (0) at (7.5,0.65) [draw=red,ellipse] {call};
\node (1) at (7.5,-0.65) [draw=green,ellipse] {done};
\path[->]
(0) edge (1)
edge [bend right] (2)
(2) edge (3)
(3) edge [bend right] (1)
(4) edge (5)
edge (7)
(6) edge (7)
edge (5);
\end{tikzpicture}
\right) \strComp \seman{P} &
\end{align*}
\caption{CCS interpretation as strategies}\label{fig:lccs:interp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliography{biblio}
\bibliographystyle{alpha}
\end{document}