% Théophile Bastian, 2018-08-04
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Benchmarking}
Benchmarking turned out to be, quite surprisingly, the hardest part of the
project. It required a lot of investigation to find a working protocol and,
afterwards, a good deal of code reading and programming to get the solution
working.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Requirements}
To provide relevant benchmarks of the \ehelfs{} performance, one must sample
at least a few hundred or thousand stack unwindings, since unwinding a single
frame with regular DWARF takes on the order of $10\,\mu s$, and \ehelfs{} were
expected to perform significantly better.

However, unwinding over and over again from the same program point would be
pointless, since \prog{libunwind} would simply cache the relevant DWARF row.
On the other hand, deliberately unwinding from a different location every
time would be cheating as well, since it would defeat \prog{libunwind}'s
caching entirely. All in all, the benchmarking method must exhibit a
``natural'' distribution of unwinding points.

Another requirement is to distribute the unwinding points quite evenly across
the program: we would like to benchmark stack unwindings that cross standard
library functions, that start from inside them, etc.

Finally, the unwound program must be interesting enough: it should enter and
exit a lot of functions, nest function calls, and have FDEs that are not as
simple as the one in Listing~\ref{lst:ex1_dw}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Presentation of \prog{perf}}
\prog{Perf} is a \emph{profiler} that comes with the Linux ecosystem (it is
actually developed within the Linux kernel source tree). A profiler is a
program that analyzes the performance of another program by recording the
time spent in each function, including within nested calls.

For this purpose, the basic idea is to stop the profiled program at regular
intervals, unwind its stack, record the currently nested function calls, and
aggregate the sampled data in the end.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Benchmarking with \prog{perf}}