From 6e9cec0c325f0ae535238cd856e2a17cd943bf24 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Th=C3=A9ophile=20Bastian?=
Date: Mon, 28 Aug 2017 20:10:06 +0200
Subject: [PATCH] Fix few typos

---
 report/report.tex | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/report/report.tex b/report/report.tex
index f3fffbe..c3f378c 100644
--- a/report/report.tex
+++ b/report/report.tex
@@ -86,7 +86,7 @@ To avoid such disasters, the industry nowadays uses a wide range of techniques
 to catch bugs as early as possible --- which, hopefully, is before the
 product's release date. These techniques include of course a lot of testing on
 simulated hardware or FPGAs (since an actual processor is extremely expensive
-to actually print). A lot of testing is run as a routine on the current version
+to burn). A lot of testing is run as a routine on the current version
 of the hardware, to catch and notify the designers, since it remains the
 easiest way to test the behaviour of a circuit. Symbolic trajectory
 evaluation~\cite{hazelhurst1997symbolic} has also its place in the domain,
@@ -97,18 +97,19 @@ since its results are more precise; yet it is also too expensive to run on a
 significantly long number of cycles, and therefore a lot of bugs are
 impossible to catch this way.
 
-The previous methods are a good cheap method to test a circuit, but give only
-little confidence in its correction --- it only proves that among all the cases
-that were tested, all yielded a correct behaviour. These reasons led to the
-development of proved hardware in the industry. On circuits as complex as
-processors, usually, only sub-components are proved correct with respect to a
-given specification of its behaviour (usually source code that should behave as
-the processor is expected to behave, itself with respect to the written
-documentation draft of the circuit). These proofs are typically valid only
-while the circuit is kept in a specified context, \ie{} a set of valid inputs
-and outputs, etc. --- that should, but is not proved to, be respected by the
-other parts of the circuit. Yet, this trade-off between proved correctness and
-engineer's work time already gives a pretty good confidence in the circuit.
+The previous methods are great cheap strategies to run the first tests on a
+circuit, but give only little confidence in its correctness --- they only
+prove that among all the cases that were tested, all yielded a correct
+behaviour. These reasons led to the development of proved hardware in the
+industry. On circuits as complex as processors, usually, only sub-components
+are proved correct with respect to a given specification of their behaviour
+(usually source code that should behave as the processor is expected to
+behave, itself with respect to the written documentation draft of the
+circuit). These proofs are typically valid only while the circuit is kept in a
+specified context, \ie{} a set of valid inputs, voltages, etc. --- that
+should, but is not proved to, be respected by the other parts of the circuit.
+Yet, this trade-off between proved correctness and engineer's work time
+already gives pretty good confidence in the circuit.
 
 In this context, Carl Seger was one of the main developers of fl at
 Intel~\cite{seger1993vos}~\cite{seger2005industrially}~\cite{seger2006design},