# Running the benchmarks

Pick a name for your `eh_elfs` directory; we will refer to it as `$EH_ELF_DIR` below.
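For instance (the name and location are arbitrary, any writable path will do):

```shell
export EH_ELF_DIR="$PWD/eh_elfs"
```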

## Generate the `eh_elfs`

```shell
../../generate_eh_elf.py --deps -o "$EH_ELF_DIR" \
  --keep-holes -O2 --global-switch --enable-deref-arg hackbench
```

## Record a `perf` session

```shell
perf record --call-graph dwarf,4096 ./hackbench 10 process 100
```

You can increase the first number (up to roughly 100) and the second arbitrarily to get a longer session. This will most probably take up all of your computer's resources while it runs.
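For instance, a heavier session might look like this (the exact values are only illustrative):

```shell
perf record --call-graph dwarf,4096 ./hackbench 100 process 1000
```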

## Set up the environment

```shell
source ../../env/apply [vanilla | vanilla-nocache | *eh_elf] [dbg | *release]
```

The first argument selects the flavour of libunwind you will be running; the second selects debug or release mode (use `release` to take readings, `dbg` to check for errors).
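For instance, to take readings with the `eh_elf` flavour:

```shell
source ../../env/apply eh_elf release
```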

You can reset your environment to its previous state by running `deactivate`.

If you pick the `eh_elf` flavour, you will also have to run:

```shell
export LD_LIBRARY_PATH="$EH_ELF_DIR:$LD_LIBRARY_PATH"
```

## Actually get readings

```shell
perf report 2>&1 >/dev/null
```

The order of the redirections matters: `2>&1 >/dev/null` first duplicates stderr onto the terminal, then discards stdout, so the report's own output is suppressed while anything printed on stderr — where the readings are expected to appear — remains visible.