mirror of https://github.com/tobast/libunwind-eh_elf.git
Commit graph

13 commits

Author SHA1 Message Date
Arun Sharma 52ca68c770 Fix a race condition
There is a window of time between the munmap and the tls_cache being
marked as destroyed, where there could be a bad access to memory that
has been unmapped/freed. Reorder the code a bit to close the window.
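
A minimal sketch of the reordering, with invented names (struct trace_cache,
tls_cache, tls_cache_destroyed; the real code differs): publish the destroyed
state first, then unmap, so nothing can dereference the cache inside the window.

   #include <stdlib.h>
   #include <sys/mman.h>

   struct trace_cache { void *frames; size_t frames_size; };

   static __thread struct trace_cache *tls_cache;
   static __thread int tls_cache_destroyed;

   static void
   trace_cache_free (void *arg)
   {
     struct trace_cache *cache = arg;

     tls_cache_destroyed = 1;   /* close the window before freeing */
     tls_cache = NULL;
     munmap (cache->frames, cache->frames_size);
     free (cache);
   }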

Signed-off-by: Paul Pluzhnikov <ppluzhnikov@google.com>
2011-12-16 10:45:51 -08:00
Arun Sharma 1010880548 Address x86_64 crashes when using sigaltstack
The crashes were tracked down to f->rbp_cfa_offset being incorrect.

The problem is that the {rsp,rbp}_cfa_offset fields are only 15 bits wide
(cfa_reg_offset has 30), but for the SIGRETURN frame they are filled with:

// src/x86_64/Gstash_frame.c

   f->cfa_reg_offset = d->cfa - c->sigcontext_addr;
   f->rbp_cfa_offset = DWARF_GET_LOC(d->loc[RBP]) - d->cfa;
   f->rsp_cfa_offset = DWARF_GET_LOC(d->loc[RSP]) - d->cfa;

When sigaltstack is used, these deltas can be arbitrarily large and can
easily overflow the 15- and 30-bit fields.
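
To make the constraint concrete: a 15-bit signed field can only represent
offsets in [-16384, 16383]. A hedged sketch of the kind of range check a
cached frame would need before packing such an offset (helper name invented):

   #include <stdint.h>

   /* Does `value` fit in a signed bitfield `bits` wide?
      For bits == 15 the valid range is [-16384, 16383]. */
   static int
   fits_in_signed_bits (int64_t value, int bits)
   {
     int64_t max = ((int64_t) 1 << (bits - 1)) - 1;
     return value >= -max - 1 && value <= max;
   }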

When signal handler starts running, the stack layout is:

 ... higher addresses ...
        ucontext
 CFA->
        __restore_rt (== pretcode in rt_sigframe from
                      linux-2.6/arch/x86/include/asm/sigframe.h)
 SP ->
       ... sighandler runs on this stack.

 ... lower addresses ...

This makes it very convenient to find ucontext from the CFA.

Attached patch re-tested on Linux/x86_64, no new failures.

Signed-off-by: Paul Pluzhnikov <ppluzhnikov@google.com>
Reviewed-by: Lassi Tuura <lat@cern.ch>
2011-11-27 18:34:38 -08:00
Arun Sharma 0a26727ea2 Fix TLS destructor ordering problems
Glibc calls thread-specific dtors in the order in which the keys were added,
so the first dtor is the trace_cache_free() one. Then thread-specific
data for some other key is free()d, which calls into unw_backtrace(),
which uses the dangling cache and the munmapped cache->frames.
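
The ordering is easy to demonstrate; a self-contained sketch (illustrative
only, names invented) in which the first-created key's destructor runs first:

   #include <pthread.h>
   #include <stdio.h>

   static pthread_key_t k1, k2;

   static void d1 (void *p) { (void) p; puts ("dtor for first key"); }
   static void d2 (void *p) { (void) p; puts ("dtor for second key: first key's data is already gone"); }

   static void *
   thread (void *arg)
   {
     (void) arg;
     pthread_setspecific (k1, (void *) 1);
     pthread_setspecific (k2, (void *) 1);
     return NULL;
   }

   int
   main (void)
   {
     pthread_t t;
     pthread_key_create (&k1, d1);   /* created first, dtor runs first */
     pthread_key_create (&k2, d2);
     pthread_create (&t, NULL, thread, NULL);
     pthread_join (t, NULL);
     return 0;
   }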

[ Minor rename + compiler warning fix: asharma@fb.com ]
Signed-off-by: Paul Pluzhnikov <ppluzhnikov@google.com>
2011-10-29 17:12:36 -07:00
Arun Sharma 08077a4962 pthread_once() workaround for FreeBSD and Solaris
On FreeBSD, as well as on Solaris < 10, a weak pthread_once stub is
always exported from libc. But it does nothing, which means that if the
threading library is not loaded, a pthread_once() call does not actually
run the initializer function. The construct
  if (likely (pthread_once != 0))
  {
    pthread_once(&trace_cache_once, &trace_cache_init_once);
then fails to initialize the trace cache on x86_64.

Work around by checking that the initializer was indeed called.
Note that this can break if libthr is loaded dynamically, but my belief
is that there are no platforms which allow dynamic loading of the threading
library.
2011-10-29 16:53:30 -07:00
Lassi Tuura d2525ec936 Use single level hash table for fast trace. 2011-05-06 22:09:07 -07:00
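For context, "single level" here reads as a direct-mapped table: one hash,
one slot per address, no second-level structure. A hedged sketch (slot count
and hash constant invented):

   #include <stdint.h>
   #include <stddef.h>

   #define SLOT_COUNT 4096   /* power of two, assumed size */

   struct slot { uintptr_t addr; int attrs; };
   static struct slot table[SLOT_COUNT];

   static struct slot *
   slot_for (uintptr_t addr)
   {
     /* Multiplicative hash; the top 12 bits select one of 4096 slots. */
     size_t i = (size_t) ((addr * 0x9E3779B97F4A7C15ULL) >> 52);
     return &table[i];   /* a hit iff table[i].addr == addr */
   }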
Lassi Tuura 5c2cade264 Inline access to initial register values as it's known to be safe. 2011-05-06 20:19:36 -07:00
Lassi Tuura ae5c1f2adf Performance optimisations for fast trace.
Insert static branch prediction predicates in useful places and avoid
unnecessary code in the hottest paths. Bypass unnecessary indirect
calls, in particular to access_mem(), when known to be safe.
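
The branch prediction predicates are the usual __builtin_expect idiom, which
this code base spells likely()/unlikely(); a minimal sketch:

   #define likely(x)    __builtin_expect (!!(x), 1)
   #define unlikely(x)  __builtin_expect (!!(x), 0)

   int
   lookup (const void *cache)
   {
     if (unlikely (cache == NULL))   /* cold branch, moved off the hot path */
       return -1;
     return 0;                       /* fall-through stays hot */
   }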
2011-04-17 20:34:38 -07:00
Arun Sharma 15f182828d Use __thread instead of pthread_getspecific() 2011-04-05 22:06:51 -07:00
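The change in general terms, as a sketch: a pthread_getspecific() lookup is a
function call per access, while a __thread variable compiles to a direct TLS
access.

   struct trace_cache;

   /* Before (sketch): dynamic thread-specific data, one call per access.
      pthread_key_t cache_key;
      struct trace_cache *cache = pthread_getspecific (cache_key);  */

   /* After (sketch): compiler-supported TLS, a plain memory access. */
   static __thread struct trace_cache *tls_cache;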
Lassi Tuura 5f38f35d5d Drop a call frame in tdep_trace and avoid a call to unw_step.
Dropping the extra frame for unw_backtrace itself using unw_step is
approximately 15% slower than skipping the frame in tdep_trace.  So
drop the frame in the latter, and make the function a private
implementation detail for libunwind, not an exported interface.

Also move the unw_getcontext call back into unw_backtrace to avoid an
extra call frame in case slow_backtrace does not get inlined into
unw_backtrace.
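
A simplified, hedged sketch of the resulting shape using only the public API
(the real fast path skips the frame inside tdep_trace instead of via
unw_step, which the commit measures as about 15% faster):

   #define UNW_LOCAL_ONLY
   #include <libunwind.h>

   int
   sketch_backtrace (void **buffer, int size)
   {
     unw_context_t context;
     unw_cursor_t cursor;
     unw_word_t ip;
     int n = 0;

     unw_getcontext (&context);   /* captured here: no helper frame to drop */
     if (unw_init_local (&cursor, &context) < 0)
       return 0;

     /* The first unw_step moves past this function's own frame, so it
        never appears in the output. */
     while (n < size && unw_step (&cursor) > 0)
       {
         unw_get_reg (&cursor, UNW_REG_IP, &ip);
         buffer[n++] = (void *) ip;
       }
     return n;
   }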
2011-04-01 00:00:39 -07:00
Lassi Tuura 3b9fd99cb7 Assign copyright as requested by the author. 2011-03-25 00:20:49 -07:00
Lassi Tuura f1ea02be58 Reset 'used' to zero after expanding frame cache hash table. 2011-03-25 00:20:48 -07:00
Lassi Tuura 44a14d1364 Integrate fast trace into backtrace(). 2011-03-24 22:33:55 -07:00
Lassi Tuura 9e98f15e9a Fast back-trace for x86_64 for only collecting the call stack.
Adds a new function to perform a pure stack walk without unwinding,
functionally similar to backtrace() but accelerated by an address
attribute cache the caller maintains across calls.
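
A minimal usage sketch of the entry point this series builds up,
unw_backtrace(), a drop-in analogue of glibc's backtrace():

   #include <libunwind.h>
   #include <stdio.h>

   int
   main (void)
   {
     void *frames[64];
     int i, n;

     /* Collect return addresses only; no register recovery is done. */
     n = unw_backtrace (frames, 64);
     for (i = 0; i < n; i++)
       printf ("#%d %p\n", i, frames[i]);
     return 0;
   }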
2011-03-24 22:33:17 -07:00