Linux dump thread stack

How to dump thread stacks

I want to dump the stacks of the threads in a file. How can I do that in Linux? How can I find the starting address of the stack and its size? Note that I want to do this programmatically from the same process (not using ptrace, gdb, or anything like that).

If you are using GCC, there are built-in functions (such as __builtin_frame_address) that give you an address within the current stack frame; otherwise you can find the stack mapping in /proc/self/maps.
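A rough sketch of both suggestions (assuming GCC or Clang and a Linux /proc filesystem); note that the line tagged [stack] in /proc/self/maps covers only the main thread's stack, and the function name here is mine:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Sketch: address of the current stack frame (GCC/Clang built-in). */
    printf("current frame: %p\n", __builtin_frame_address(0));

    /* The main thread's stack range is the maps line tagged "[stack]". */
    FILE *f = fopen("/proc/self/maps", "r");
    char line[256];
    while (f && fgets(line, sizeof line, f))
        if (strstr(line, "[stack]"))
            fputs(line, stdout);
    if (f)
        fclose(f);
    return 0;
}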

5 Answers

If you use the GNU C library, you can use the backtrace() function.
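A minimal sketch of that approach, assuming glibc: it writes the calling thread's backtrace into a file (the function name and path handling are mine, not part of any API); link with -rdynamic so the symbol names show up.

#include <execinfo.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch only: dump the calling thread's backtrace to the given file. */
void dump_my_backtrace(const char *path)
{
    void *frames[64];
    int n = backtrace(frames, 64);          /* capture up to 64 return addresses */
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);

    if (fd >= 0) {
        backtrace_symbols_fd(frames, n, fd); /* one symbolized frame per line */
        close(fd);
    }
}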

Use the pthread_attr_getstack function; this yields the thread’s stack address and size.

This won’t do the job. If the thread is using a system-allocated stack, pthread_attr_getstack will return address and size 0.

@MetallicPriest That would work I suppose, but it is by definition non-portable :-). That may not matter, depending on the OP’s need.

But on my system, which is Ubuntu 10 64-bit, pthread_attr_t has a strange definition, which is different from what it should be: typedef union { char __size[__SIZEOF_PTHREAD_ATTR_T]; long int __align; } pthread_attr_t;

@denniston.t, you’re right. I think it’s the same on Solaris, and probably other Unixen. Can it be relied upon if the thread has a user-defined stack?

@BrettHale: Well, if the thread stack is user-defined, it means your application already has all the information needed because your application allocated the stack. Meaning you don’t need to call that function anyway.
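As discussed in the comments, on glibc the usual (non-portable) route is to fill the attributes of the running thread with pthread_getattr_np() first and then query them. A minimal sketch (the function name is mine):

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

/* Sketch: print the calling thread's stack base address and size. */
static void print_my_stack_bounds(void)
{
    pthread_attr_t attr;
    void *stack_addr;
    size_t stack_size;

    /* Fill attr with the attributes of the calling thread, including the
       stack that was actually allocated for it (system- or user-provided). */
    if (pthread_getattr_np(pthread_self(), &attr) != 0)
        return;

    if (pthread_attr_getstack(&attr, &stack_addr, &stack_size) == 0)
        printf("stack base: %p, size: %zu bytes\n", stack_addr, stack_size);

    pthread_attr_destroy(&attr);
}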

Use gdb to attach to a running process via its PID (process ID):
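The attach command itself is missing here; presumably it was something along the lines of:

gdb -p <pid>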

And then type bt to get a backtrace (or thread apply all bt for a backtrace of every thread).

Glibc has a function called backtrace() which does what you want.

Last time I tried it, results were less than perfect, but somewhat useful. YMMV.

Why do you want to dump your threads' stacks?

Do you want some kind of application checkpointing? If so, there are libraries implementing it, imperfectly but usefully in practice.


The point is that even if you manage to dump your threads' stacks to a file, I'm not sure you'll be able to do anything useful with that file. You won't even be able to restart your application using these stacks, because when restarted (even in the same configuration) the stacks might be located elsewhere (because of ASLR), unless you write a 0 to /proc/sys/kernel/randomize_va_space.

I have heard there are also some Linux libraries which can force a running process to dump a core file (that you could examine with gdb later) without aborting that process.

A call stack is something very brittle, and you cannot re-use it without precautions.

If you just want to inspect your call stack, look into Ian Taylor’s libbacktrace.

Notice that several checkpointing infrastructures (including SBCL's save-lisp-and-die) are not capable of restoring any threads other than the main one. That says something about the difficulty of managing pthread stacks.


How to print the current thread stack trace inside the Linux kernel?

I would like to be able to print the stack trace of a thread in the Linux kernel. In detail: I want to add code to specific functions (e.g. swap_writepage()) that will print the complete stack trace of the thread in which the function is being called. Something like this:

int swap_writepage(struct page *page, struct writeback_control *wbc)
{
	/* code goes here to print stack trace */
	int ret = 0;

	if (try_to_free_swap(page)) {
		unlock_page(page);
		goto out;
	}
	if (frontswap_store(page) == 0) {
		set_page_writeback(page);
		unlock_page(page);
		end_page_writeback(page);
		goto out;
	}
	ret = __swap_writepage(page, wbc, end_swap_bio_write);
out:
	return ret;
}

2 Answers

The Linux kernel has a very well-known function called dump_stack(), which prints the current call stack to the kernel log. Place it in your function wherever you want to see the stack information.
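A minimal sketch of the placement, using the swap_writepage() body from the question; the only addition is the dump_stack() call:

int swap_writepage(struct page *page, struct writeback_control *wbc)
{
	int ret = 0;

	/* Print the current thread's kernel stack trace to the kernel log
	 * (visible with dmesg). */
	dump_stack();

	if (try_to_free_swap(page)) {
		unlock_page(page);
		goto out;
	}
	if (frontswap_store(page) == 0) {
		set_page_writeback(page);
		unlock_page(page);
		end_page_writeback(page);
		goto out;
	}
	ret = __swap_writepage(page, wbc, end_swap_bio_write);
out:
	return ret;
}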

@rakib is exactly right of course.

In addition, I’d like to point out that one can define simple and elegant macros that help print debug info as and when required. Over the years, I’ve put these macros and convenience routines into a header file; you can check it out and download it here: "A Header of Convenience".

There are macros / functions to:

  • make debug prints along with function name / line number info (via the usual printk() or trace_printk()), and only if DEBUG mode is on (see the sketch after this list)
  • dump the kernel-mode stack
  • print the current context (process or interrupt along with flags in the form that ftrace uses)
  • a simple assert() macro (!)
  • a cpu-intensive DELAY_LOOP (useful for test rigs that must spin on the processor)
  • an equivalent to usermode sleep functionality
  • a function to calculate the time delta given two timestamps (timeval structs)
  • convert decimal to binary, and
  • a few more.
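For illustration only, a sketch of the first item above (this is not the author's actual header; the macro name is mine):

/* Sketch of a debug print carrying function name and line number,
 * compiled away entirely when DEBUG mode is off. */
#ifdef DEBUG
#define DBGPRINT(fmt, ...) \
	printk(KERN_DEBUG "%s:%d: " fmt, __func__, __LINE__, ##__VA_ARGS__)
#else
#define DBGPRINT(fmt, ...) do { } while (0)
#endif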


