In case the user doesn't specify whether to unwind using the ARM-specific
unwind tables or the DWARF info, libunwind should prefer the latter. Since
DWARF expressions are more powerful than the ARM-specific unwind tables,
arm_find_proc_info is changed to check for DWARF first.
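A minimal sketch of the resulting lookup order, assuming the ARM backend's
UNW_TRY_METHOD()/UNW_ARM_METHOD_* flags; the exidx helper name is a
placeholder:

    /* Sketch only: dwarf_find_proc_info() is libunwind's DWARF lookup,
       arm_exidx_lookup() stands in for the ARM exidx table lookup.  */
    static int
    arm_find_proc_info_sketch (unw_addr_space_t as, unw_word_t ip,
                               unw_proc_info_t *pi, int need_unwind_info,
                               void *arg)
    {
      int ret = -UNW_ENOINFO;

      /* DWARF expressions are more powerful, so try them first.  */
      if (UNW_TRY_METHOD (UNW_ARM_METHOD_DWARF))
        ret = dwarf_find_proc_info (as, ip, pi, need_unwind_info, arg);

      /* Fall back to the ARM-specific unwind tables only on failure.  */
      if (ret < 0 && UNW_TRY_METHOD (UNW_ARM_METHOD_EXIDX))
        ret = arm_exidx_lookup (as, ip, pi, need_unwind_info, arg);

      return ret;
    }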
Signed-off-by: Ken Werner <ken.werner@linaro.org>
Prevents unw_step from trying to unwind the stack using the ARM-specific
unwind tables in case the DWARF-based unwinding was successful.
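Roughly, the ARM unw_step() path then looks like this (the cursor layout and
the exidx step helper are placeholders, shown only to illustrate the guard):

    /* Try DWARF first; skip the exidx fallback when it already
       produced a result.  */
    ret = dwarf_step (&c->dwarf);

    if (ret < 0 && UNW_TRY_METHOD (UNW_ARM_METHOD_EXIDX))
      ret = arm_exidx_step_sketch (c);   /* placeholder for the exidx path */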
Signed-off-by: Ken Werner <ken.werner@linaro.org>
Initialize the return value to -1 in order to prevent arm_find_proc_info from
returning zero. This could happen in case the environment variable
UNW_ARM_UNWIND_METHOD doesn't allow exidx and/or DWARF unwinding.
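For illustration, a caller can restrict the allowed methods through that
variable before libunwind is used; the bit values (1 = DWARF, 2 = frame
pointer, 4 = exidx) are an assumption about the ARM backend's mask:

    #include <stdlib.h>

    int
    main (void)
    {
      /* Allow DWARF only.  With the -1 initialization, a lookup that
         finds no DWARF info now reports an error instead of success.  */
      setenv ("UNW_ARM_UNWIND_METHOD", "1", 1);

      /* ... run code that unwinds via libunwind ... */
      return 0;
    }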
Signed-off-by: Ken Werner <ken.werner@linaro.org>
Change _UPTi_find_unwind_table to also look for the ARM-specific unwind
information. Adjust the ARM unwind code to read memory using the accessor
routines.
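A small usage sketch of the public accessor API (not the patched code itself)
showing the kind of read the exidx parser has to perform so it also works for
remote targets:

    #include <libunwind.h>

    /* Read one word of target memory through the address-space
       accessors instead of dereferencing a pointer directly.  */
    static int
    read_target_word (unw_addr_space_t as, unw_word_t addr,
                      unw_word_t *val, void *arg)
    {
      unw_accessors_t *a = unw_get_accessors (as);

      /* The fourth argument selects read (0) or write (1).  */
      return (*a->access_mem) (as, addr, val, 0, arg);
    }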
Signed-off-by: Ken Werner <ken.werner@linaro.org>
Rename the DWARF dl_iterate_phdr callback routine and the callback_data
structure to dwarf_callback and dwarf_callback_data. Make them available
within libunwind by declaring both in the dwarf.h header file.
Signed-off-by: Ken Werner <ken.werner@linaro.org>
A previous change reduced the number of arguments that this function
takes, but at least one call did not get updated, resulting in a build
failure on ia64-linux. This patch fixes it.
On ia64-hpux version 11.31, <sys/ptrace.h> has been removed.
This patch adds a configure check for this header file, and only
includes <sys/ptrace.h> if it exists.
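On the C side this results in the usual guard, HAVE_SYS_PTRACE_H being the
macro a configure header check defines:

    #include "config.h"

    #ifdef HAVE_SYS_PTRACE_H
    # include <sys/ptrace.h>
    #endif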
This patch adds support for resuming at a certain stack frame even if signal
frames are involved. To restore the registers, the sigreturn trampoline is
used. RT and non-RT signal frames are handled for both >=2.6.18 and
<2.6.18 kernels.
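As a usage sketch with the public API: resume two frames up; when a signal
frame lies in between, unw_resume() goes through the sigreturn trampoline to
restore the complete register set:

    #include <libunwind.h>

    static void
    resume_two_frames_up (void)
    {
      unw_context_t uc;
      unw_cursor_t cursor;
      int i;

      unw_getcontext (&uc);
      unw_init_local (&cursor, &uc);

      for (i = 0; i < 2; i++)
        if (unw_step (&cursor) <= 0)
          return;                  /* not enough frames */

      unw_resume (&cursor);        /* does not return on success */
    }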
Signed-off-by: Ken Werner <ken.werner@linaro.org>
This patch adds a few more patterns to the check that detects if the IP
points to a sigreturn sequence.
Signed-off-by: Ken Werner <ken.werner@linaro.org>
Define an appropriate fdesc struct and its corresponding accessors that take
care of the Thumb marker on ARM. Call the __clear_cache built-in instead of
flush_cache if the GNU compiler is used.
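A rough sketch of both pieces; the descriptor layout and the flush_cache
fallback are assumptions for illustration:

    #include <libunwind.h>

    /* Assumed minimal descriptor layout.  On ARM, bit 0 of a code
       address is the Thumb marker and must be stripped before use.  */
    struct fdesc
      {
        unw_word_t ip;
      };

    static inline unw_word_t
    fdesc_get_ip (const struct fdesc *fd)
    {
      return fd->ip & ~(unw_word_t) 0x1;
    }

    /* Flush the instruction cache after emitting code.  */
    static inline void
    flush_code_range (char *start, char *end)
    {
    #ifdef __GNUC__
      __clear_cache (start, end);          /* GCC/Clang built-in */
    #else
      flush_cache (start, end - start);    /* platform routine (placeholder) */
    #endif
    }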
Signed-off-by: Ken Werner <ken.werner@linaro.org>
test-async-sig.c, test-flush-cache.c and Ltest-resume-sig.c define
UNW_LOCAL_ONLY and therefore only need LIBUNWIND_local. Gtest-dyn1.c
calls '_U_dyn_cancel' and test-trace.c uses 'unw_backtrace', both of which
are in LIBUNWIND_local.
Signed-off-by: Ken Werner <ken.werner@linaro.org>
Insert static branch prediction predicates in useful places and avoid
unnecessary code in the hottest paths. Bypass unnecessary indirect
calls, in particular to access_mem(), when known to be safe.
Since the fast unwinding code path doesn't need the full context,
a faster target-dependent getcontext is implemented.
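A sketch of the kind of fast path this enables; the helper and macro names
are illustrative only. When unwinding the local address space, a plain load
replaces the indirect access_mem() call:

    #include <stdint.h>
    #include <libunwind.h>

    #define likely(x)    __builtin_expect (!!(x), 1)
    #define unlikely(x)  __builtin_expect (!!(x), 0)

    /* Read a word, bypassing the accessor indirection when the address
       is known to belong to our own address space.  */
    static inline int
    fast_read_word (unw_addr_space_t as, unw_word_t addr, unw_word_t *val,
                    unw_accessors_t *a, void *arg)
    {
      if (likely (as == unw_local_addr_space))
        {
          *val = *(unw_word_t *) (uintptr_t) addr;
          return 0;
        }

      return (*a->access_mem) (as, addr, val, 0, arg);
    }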
Signed-off-by: Lassi Tuura <lat@cern.ch>
Creating an alternate signal stack with a size of SIGSTKSZ (usually 8 KiB) is
not enough on some targets because unw_cursor_t is already bigger than that.
Since the size of unw_cursor_t is part of the ABI, UNW_TDEP_CURSOR_LEN
can't be changed without breaking existing code. Therefore, the size of the
alternate signal stack has been increased to 1 MiB.
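Sketch of the corresponding setup in the tests, using a plain sigaltstack()
call:

    #include <signal.h>
    #include <stdlib.h>

    #define ALT_STACK_SIZE (1024 * 1024)  /* 1 MiB, comfortably larger
                                             than unw_cursor_t */

    static void
    setup_alt_stack (void)
    {
      stack_t ss;

      ss.ss_sp = malloc (ALT_STACK_SIZE);
      ss.ss_size = ALT_STACK_SIZE;
      ss.ss_flags = 0;

      if (ss.ss_sp == NULL || sigaltstack (&ss, NULL) < 0)
        abort ();
    }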
Signed-off-by: Ken Werner <ken.werner@linaro.org>
In order to have the DWARF_* macros work properly, a generic and a local
variant of ex_tables.c have been created.
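The local variant follows libunwind's usual G*/L* pattern, roughly:

    /* Lex_tables.c (sketch): compile the generic source a second time
       with UNW_LOCAL_ONLY so the DWARF_* macros expand to their local
       forms.  */
    #define UNW_LOCAL_ONLY
    #include "Gex_tables.c"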
Signed-off-by: Ken Werner <ken.werner@linaro.org>
We'd like to avoid calls to all malloc-related functions so that libunwind
remains usable from within memory allocators themselves.
Signed-off-by: Paul Pluzhnikov <ppluzhnikov@google.com>
Dropping the extra frame for unw_backtrace itself using unw_step is
approximately 15% slower than skipping the frame in tdep_trace. So
drop the frame in the latter, and make the function a private
implementation detail for libunwind, not an exported interface.
Also move the unw_getcontext call back into unw_backtrace to avoid an
extra call frame in case slow_backtrace does not get inlined into
unw_backtrace.
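With unw_backtrace exported and tdep_trace kept internal, callers use it just
like backtrace(); for example:

    #include <libunwind.h>
    #include <stdio.h>

    int
    main (void)
    {
      void *frames[64];
      int i, n;

      n = unw_backtrace (frames, 64);   /* fill frames[] with return addresses */
      for (i = 0; i < n; i++)
        printf ("%p\n", frames[i]);

      return 0;
    }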
Adds a new function to perform a pure stack walk without unwinding,
functionally similar to backtrace() but accelerated by an address-attribute
cache the caller maintains across calls.