Proof-read conclusion; index Wrapping it all up? in TOC

Théophile Bastian 2024-09-01 16:21:54 +02:00
parent b5b0296102
commit d3b99be7a1
2 changed files with 12 additions and 9 deletions


@@ -1,4 +1,5 @@
 \chapter*{Wrapping it all up?}\label{chap:wrapping_up}
+\addcontentsline{toc}{chapter}{Wrapping it all up?}
 In \autoref{chap:palmed}, we introduced \palmed{}, a framework to build a
 backend model. Following up in \autoref{chap:frontend}, we introduced a


@@ -7,7 +7,8 @@ analyzing the low-level performance of a microkernel:
 \item frontend bottlenecks ---~the processor's frontend is unable to
 saturate the backend with instructions (\autoref{chap:palmed});
 \item backend bottlenecks ---~the backend is saturated with instructions
-and processes them as fast as possible (\autoref{chap:frontend});
+from the frontend and is unable to process them fast enough
+(\autoref{chap:frontend});
 \item dependencies bottlenecks ---~data dependencies between instructions
 prevent the backend from being saturated; the latter is stalled
 awaiting previous results (\autoref{chap:staticdeps}).
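The three bottleneck classes in the list above can be read as three lower bounds on a microkernel's cycles per iteration, the largest of which names the bottleneck. The following toy model is an illustrative sketch only (it is not part of the thesis tooling, and all parameter names are ours):

```python
# Illustrative sketch: classify the dominant bottleneck of a microkernel
# from three lower bounds on its steady-state cycles per iteration.
# This is a hypothetical simplification, not the thesis's actual models.

def bottleneck(n_uops, issue_width, port_load, critical_path):
    """Return (cycles, kind) for one kernel iteration.

    n_uops        -- micro-ops the frontend must issue per iteration
    issue_width   -- micro-ops the frontend can issue per cycle
    port_load     -- micro-ops bound to the most loaded backend port
    critical_path -- latency (cycles) of the longest dependency chain
    """
    bounds = {
        "frontend": n_uops / issue_width,     # frontend cannot saturate the backend
        "backend": port_load,                 # most loaded port limits throughput
        "dependencies": critical_path,        # backend stalls awaiting results
    }
    kind = max(bounds, key=bounds.get)
    return bounds[kind], kind
```

For instance, a kernel whose longest dependency chain takes 10 cycles while its port pressure and issue bounds are lower is dependency-bound: `bottleneck(8, 4, 3, 10)` returns `(10, "dependencies")`.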
@@ -49,8 +50,9 @@ targeted microarchitecture.
 port-mapping of a processor, serving as a backend model.
 \item We manually extracted a frontend model for the Cortex A72 processor.
 We believe that the foundation of our methodology works on most
-processors. The main characteristics of a frontend, apart from their
-instructions' \uops{} decomposition and issue width, must however still
+processors. To this end, we provide a parametric model that may serve
+as a scaffold for future works willing to build an automatic frontend
+model. Some parameters of this model must however still
 be investigated, and their relative importance evaluated.
 \item We provided with \staticdeps{} a method to extract data
 dependencies between instructions. It is able to detect
@@ -72,9 +74,9 @@ backend model's accuracy and our dependencies model significantly improves
 \uica{}'s results, while being consistent with a dynamic dependencies analysis.
 Evaluating the three models combined as a complete analyzer would have been
-most meaningful. However, as we argue in \autoref{chap:wrapping_up} abvoe, this
-is sadly not pragmatic, as tools do not easily combine without a large amount f
-engineering.
+most meaningful. However, as we argue in the pre-conclusive chapter
+\nameref{chap:wrapping_up}, this is sadly not pragmatic, as tools do not easily
+combine without a large amount of engineering.
 \bigskip{}
@@ -89,13 +91,13 @@ benchmark set. While we built this benchmark set aiming for representative
 data, there is no clear evidence that these dependencies are so strongly
 present in the codes analyzed in real use cases. We however believe that such
 cases regularly occur, and we also saw that the performance of code analyzers
-drop sharply in their presence.
+drops sharply in their presence.
 \smallskip{}
 We also found the bottleneck prediction offered by some code analyzers still
-uncertain. In our experiments, the tools disagreed more often than not on the
-presence or absence of a bottleneck, with no outstanding tool; we are thus
+very uncertain. In our experiments, the tools disagreed more often than not on
+the presence or absence of a bottleneck, with no outstanding tool; we are thus
 unable to conclude on the relative performance of tools on this aspect. On the
 other hand, sensitivity analysis, as implemented \eg{} by \gus{}, seems a
 theoretically sound way to evaluate the presence or absence of a bottleneck in