Add Wrapping it all up? pre-conclusion small chapter
parent 461fb3786f
commit a2229adeea
3 changed files with 63 additions and 20 deletions
58
manuscrit/90_wrapping_up/main.tex
Normal file
@@ -0,0 +1,58 @@
\chapter*{Wrapping it all up?}\label{chap:wrapping_up}

In \autoref{chap:palmed}, we introduced \palmed{}, a framework to build a
backend model. Following up in \autoref{chap:frontend}, we introduced a
frontend model for the ARM-based Cortex A72 processor. Then, in
\autoref{chap:staticdeps}, we further presented a dependency detection model.

Put together, these three parts cover the major bottlenecks that a code
analyzer must take into account. It would thus be satisfying to conclude this
manuscript with a unified tool combining the three, and with an evaluation of
the full extent of this PhD's work against the state of the art.

\medskip{}

This is, sadly, not reasonably feasible without a considerable amount of
engineering effort and duct-tape code, for several reasons.

\smallskip{}

First, the choice of the Cortex A72 as an architecture means relying on a
low-power processor and on the Raspberry Pi microcomputer for benchmarking
---~which has little RAM, low-throughput storage and is prone to
overheating. This makes extensive benchmarking impractical.

There is also the heterogeneity of formats used to describe benchmarks. Some
tools, such as \staticdeps{}, rely on assembled objects and directly deal with
assembly. Others use custom formats to describe assembly instructions and
their many variants ---~for instance, the performance of \lstxasm{mov
\%rax, \%rbx} is far from that of \lstxasm{mov 8(\%rax, \%r10), \%rbx},
even though both are \texttt{mov} instructions. For one, \palmed{} uses such a
format, defined by \pipedream{} (its benchmarking backend), while \uopsinfo{}
uses a different one. Overall, this makes it a challenge to use multiple tools
on the same code, and an even greater challenge to compare their results
---~let alone pipeline them.
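
To make this last point concrete, the sketch below (purely illustrative
Python, not taken from any of these tools) shows why a per-mnemonic
description is not enough: the kind of each operand must be part of an
instruction's canonical form, and each tool encodes this information in its
own way.

\begin{lstlisting}[language=Python]
import re

def split_operands(rest: str) -> list[str]:
    """Split AT&T-syntax operands on top-level commas only: memory
    operands such as 8(%rax, %r10) contain commas themselves."""
    ops, depth, cur = [], 0, ""
    for c in rest:
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        if c == "," and depth == 0:
            ops.append(cur)
            cur = ""
        else:
            cur += c
    if cur.strip():
        ops.append(cur)
    return ops

def operand_kind(op: str) -> str:
    """Crude operand classification; real tools also track widths, etc."""
    op = op.strip()
    if re.fullmatch(r"%\w+", op):
        return "reg"    # register operand, e.g. %rax
    if op.startswith("$"):
        return "imm"    # immediate operand, e.g. $42
    if "(" in op:
        return "mem"    # memory operand, e.g. 8(%rax, %r10)
    return "other"

def instruction_form(insn: str) -> str:
    """Map an instruction to a canonical form keyed by operand kinds."""
    mnemonic, _, rest = insn.partition(" ")
    kinds = [operand_kind(op) for op in split_operands(rest)]
    return "_".join([mnemonic] + kinds)

# The two `mov` variants from the text map to distinct forms, and thus
# to distinct performance characteristics:
print(instruction_form("mov %rax, %rbx"))           # mov_reg_reg
print(instruction_form("mov 8(%rax, %r10), %rbx"))  # mov_mem_reg
\end{lstlisting}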
\bigskip{}

These models, however, only become really meaningful when combined
---~or, even better, when each of them can be combined with any model of the
other two parts. To the best of our knowledge, no such modular tool exists;
nor is there any standardized approach to interact with such models. The
usual approach in the domain when trying a new idea is, instead, to create a
full analyzer implementing it, as we did with \palmed{} for backend models, or
as \uica{} does for frontend analysis.

In hindsight, we advocate for the emergence of such a modular code analyzer.
It might not be as convenient or well-integrated as ``production-ready''
code analyzers such as \llvmmca{} ---~which is packaged for Debian. It could,
however, greatly simplify the academic process of trying a new idea on any of
the three main models, by decoupling them. It would also ease the comparative
evaluation of those ideas, while eliminating many of the discrepancies
between experimental setups that make an actual comparison difficult ---~the
reason that prompted us to build \cesasme{} in \autoref{chap:CesASMe}. Indeed,
with such a modular tool, it would be easy to run the same experiment, in the
same conditions, while changing \eg{} only the frontend model and keeping a
well-tried backend model.
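
To make this proposal concrete, the sketch below outlines what the interface
of such a modular analyzer could look like ---~a purely hypothetical design,
not an existing tool: each model answers one bottleneck question, and a thin
driver combines them, here with a naive maximum-of-bottlenecks rule.

\begin{lstlisting}[language=Python]
from abc import ABC, abstractmethod

class BackendModel(ABC):
    @abstractmethod
    def cycles(self, kernel: list[str]) -> float:
        """Cycles per iteration if execution ports are the bottleneck."""

class FrontendModel(ABC):
    @abstractmethod
    def cycles(self, kernel: list[str]) -> float:
        """Cycles per iteration if decoding/issuing is the bottleneck."""

class DependencyModel(ABC):
    @abstractmethod
    def critical_path(self, kernel: list[str]) -> float:
        """Latency (cycles) of the longest loop-carried dependency chain."""

def predict(kernel: list[str],
            backend: BackendModel,
            frontend: FrontendModel,
            deps: DependencyModel) -> float:
    # A steady-state loop runs at the speed of its slowest resource;
    # any concrete implementation of the three interfaces can be swapped in.
    return max(backend.cycles(kernel),
               frontend.cycles(kernel),
               deps.critical_path(kernel))
\end{lstlisting}

With such interfaces, running the same experiment while only changing the
frontend model would amount to passing a different \texttt{FrontendModel}
instance to the driver, leaving the backend and dependency models untouched.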
@@ -71,26 +71,10 @@ the advantage of being automatic; our frontend model significantly improves a
backend model's accuracy and our dependencies model significantly improves
\uica{}'s results, while being consistent with a dynamic dependencies analysis.

These models, however, should become really meaningful only when combined
together ---~or, even better, when each of them could be combined with any
other model of the other parts. To the best of our knowledge, however, no such
modular tool exists; nor is there any standardized approach to interact with
such models. The usual approach of the domain to try a new idea, instead, is to
create a full analyzer implementing this idea, such as what we did with \palmed{}
for backend models, or such as \uica{}'s implementation, focusing on frontend
analysis.

In hindsight, we advocate for the emergence of such a modular code analyzer.
It would maybe not be as convenient or well-integrated as ``production-ready''
code analyzers, such as \llvmmca{} ---~which is packaged for Debian. It could,
however, greatly simplify the academic process of trying a new idea on any of
the three main models, by decorrelating them. It would also ease the
comparative evaluation of those ideas, while eliminating many of the discrepancies
between experimental setups that make an actual comparison difficult ---~the
reason that prompted us to make \cesasme{} in \autoref{chap:CesASMe}. Indeed,
with such a modular tool, it would be easy to run the same experiment, in the
same conditions, while only changing \eg{} the frontend model but keeping a
well-tried backend model.
Evaluating the three models combined as a complete analyzer would have been
most meaningful. However, as we argue in \autoref{chap:wrapping_up} above, this
is sadly not practical, as tools do not easily combine without a large amount of
engineering.

\bigskip{}
@@ -18,6 +18,7 @@
\importchapter{40_A72-frontend}
\importchapter{50_CesASMe}
\importchapter{60_staticdeps}
\importchapter{90_wrapping_up}
\importchapter{99_conclusion}

\printbibliography{}