Understanding DTrace ustack helpers
...or, everything you ever wanted to know about stack traces
I promised this post over a year ago, and now that someone's actually working on a new ustack helper, I thought it was finally time to write about what ustack helpers are, how they work, and how I went about building one for Node.js. Only a handful of ustack helpers have ever been written: Node, Java, Python, and PHP (the last of which is believed lost to the sands of time), so this post is mainly for a narrow audience of developers, plus anyone who's interested in how this all works.
This post covers a lot of the background you need to understand the details. For more information, check out my ACM article on Postmortem Debugging in Dynamic Environments. Though this post covers dynamic tracing, the challenges are similar to those for postmortem debugging, since both involve operating on a snapshot of program state without the aid of the VM itself.
The value of stack traces
Quick review: DTrace is a facility for dynamically instrumenting all kinds of operations on a system -- particularly systems in production. It's available on OS X, illumos distributions (including SmartOS and OmniOS), Solaris 10 and later, and BSD.
The hundreds of thousands of probes on a typical system can be combined with various DTrace actions to gather incredibly specific data. Some examples:
- When a program opens a specific file, grab a stack trace (to figure out what part of the program is opening the file).
- When a program writes a particular string to stderr, take a core dump (to debug why it hit some particular error case).
- When any program opens a specific file, print the program name and pid (to figure out who's accessing a file).
- At a frequency of 97Hz, if a given process is on-CPU, grab a stack trace (to profile it, to see where it's spending CPU time).
- When a given syscall returns a given errno (e.g., close(2) returns EBADF), save a core file of the current process (to debug why that happened -- see my previous post).
- When malloc() returns NULL for a process, grab a stack trace (to see who's failing to allocate memory).
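To make a couple of these concrete, here's a sketch of the first and third examples as a D script (the file name is arbitrary, and on newer systems the open syscall probe may be openat instead):

```
/* When any program opens /etc/passwd, print who did it and grab a user stack. */
syscall::open:entry
/copyinstr(arg0) == "/etc/passwd"/
{
	printf("%s (pid %d)\n", execname, pid);
	ustack();
}
```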
This is just a sampling, of course. Among probes that work "out of the box" today are:
- Process events: processes created, exec'd, exited; signals sent and delivered
- System calls: every syscall entry and exit, which makes it easy to trace files opened, filesystem reads and writes, and other specific events in a process
- Disk I/O: I/Os started, completed
- Network: IP and TCP events (packets received and dropped, state transitions)
- Virtual memory: pageout, pagein events
- Nearly any function entry and exit in the kernel
- Nearly any native function entry and function exit in any userland process
- Nearly any instruction in any userland process
- Apache: all kinds of server events
- Node.js: HTTP request received, response sent, garbage collection start/done, and so on
- Postgres: all kinds of server events
- Java, Perl, Python, and Erlang: various runtime operations (often including function entry/exit)
With a tiny amount of work, you can also add your own probes to Node.js, Lua, Ruby, and Perl.
With the ability to grab a stack trace when any of these events happens, you can analyze performance (e.g., profiling on-CPU time) or debug a particular problem (e.g., "why does my Node process keep calling gettimeofday?"). But while DTrace users just call ustack() to get a stack trace, under the hood the process of recording a stack trace at an arbitrary point in a process running in production is deceptively tricky, and that's what this post is about.
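For instance, profiling a Node program is just a one-liner around ustack() and the profile provider. Here's a sketch, assuming the process of interest is named "node":

```
/* At 97Hz, whenever a "node" process is on-CPU, count its user stack traces. */
profile-97
/execname == "node"/
{
	@[ustack()] = count();
}
```

After a few seconds of tracing, the aggregation prints each distinct stack alongside how many times it was sampled.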
Aside: x86 stacks
What exactly is a stack trace? And how are debugging tools like DTrace able to print them out?[1]
Within a process, each thread has a stack, which keeps track of functions called, their arguments, and their local variables. On Intel systems (including x86 and amd64), there are two important pieces of state related to the stack:
- The stack pointer (register %esp (32-bit) or %rsp (64-bit)) points to the next byte of free memory on the stack.
- The frame pointer (or base pointer, register %ebp (32-bit) or %rbp (64-bit)) points to the first address in the current stack frame. This value in turn usually points to the top of the previous frame, so the frame pointer is essentially the head of a linked list of stack frames.[2]
There's also an instruction pointer (register %eip (32-bit) or %rip (64-bit)) that points to the currently executing instruction in memory.
When one function calls another function, here's what happens with the stack:
- The call instruction in the parent function pushes the current value of the instruction pointer (register %eip/%rip) onto the stack, then jumps to the first instruction in the called function.
- The first step inside the called function is typically to push the current value of the frame pointer onto the stack, and to copy the current stack pointer into the frame pointer register. The called function then executes. It may use more stack space for variables and to call other functions, but when it's ready to return to the caller, the stack is in the same state as when the function started.
- When ready to return, the function pops the top value of the stack into the frame pointer register. The ret instruction pops the new top of the stack into the instruction pointer, causing control to jump back to the calling function.
If this is new to you, it's worth noting how deep this is: this is how control flows between functions in native programs. There's no magic: the notion of functions in C essentially falls out of a stack and a few basic instructions.
You can see this pattern by disassembling any native function. Here's an example, looking at the code for fork in libc (the userland function, which calls forkx() to do most of the work and then invoke the fork system call):
```
$ mdb -p $$
Loading modules: [ ld.so.1 libc.so.1 ]
> fork::dis
libc.so.1`fork:         pushl  %ebp
libc.so.1`fork+1:       movl   %esp,%ebp
...
libc.so.1`fork+0x19:    call   -0x246
...
libc.so.1`fork+0x22:    popl   %ebp
libc.so.1`fork+0x23:    ret
```
The result of these stack manipulations is that at any given time, the frame pointer register points to the head of a linked list that has one entry for every function on the call stack, up to the top of the stack. The frame pointers pushed onto the stack in each function represent the "next" pointers of the linked list, and the return address pointers pushed by the call instructions denote the address that called the next function.
If you have a snapshot of a process's memory state (i.e., a core dump), you can imagine a simple algorithm for constructing a stack trace:
```
callstack = [ current_%eip ];
frameptr = current_%ebp
while (frameptr != NULL) {
    callstack.push(value adjacent to frameptr);
    frameptr = *frameptr
}
```
In other words, the top frame is denoted by the current instruction pointer. Then we start with the current frame pointer register, and follow the linked list of frame pointers until we get to the top of the stack. Along the way, we record the instruction pointer that was saved on the stack.
There's one critical step left. This algorithm gives us a bunch of instruction addresses -- but we want human-readable function names, not memory addresses. In a traditional debugger, a memory address is pretty easy to translate to a function name, because the symbol table in the process (or core file) necessarily includes the address and size of each function. In MDB, you can see this with the "::nm" command:
```
> ::nm ! grep -C2 -w fork
...
0xfee02581|0x00000094|FUNC |GLOB |0x3  |15      |execlp
0xfedece61|0x0000015b|FUNC |GLOB |0x3  |15      |_D_cplx_div_ix
0xfee76395|0x00000024|FUNC |GLOB |0x3  |15      |fork
0xfee1b6f3|0x00000057|FUNC |GLOB |0x3  |15      |nss_endent
0xfedef150|0x00000019|FUNC |GLOB |0x3  |15      |atomic_or_ushort_nv
```
In this case, the fork function is stored at address 0xfee76395 and is 0x24 bytes long. From this, the debugger knows that when the instruction pointer is 0xfee763b3, that's inside the "fork" function, and is more conveniently printed as fork+0x1e (instruction offset "0x1e" inside the "fork" function).
As I mentioned above, this all sounds highly specific to native code, but as we'll see, dynamic environments like Node.js do basically the same thing, with some twists that we'll get to later.
Native stack traces in DTrace
It's well-understood how to grab stack traces from a debugger, but saving a stack trace in the context of a DTrace probe is a bit more complicated. Recall that DTrace records and buffers data in the kernel for later consumption by the "dtrace" command. This buffering decouples the data source (events of interest) from the consumer (the "dtrace" command), which allows the system to perform much better than traditional tools like "strace" or "truss", which actually stop the target process for long enough for the tool itself to step in and record data. This decoupling also allows DTrace to instrument extremely delicate contexts, including the kernel's pagefault handler and other critical interrupt handlers. It wouldn't be possible to instrument these contexts if the instrumentation itself had to wait for the "dtrace" process to step in and execute.
It might seem like these contexts are uninteresting to application developers, but they're actually quite relevant for a number of use cases:
- Profiling an application involves sampling stacks based on some timer -- an interrupt context.
- To see which processes are suffering the most from having been paged out, a useful trick is to instrument the virtual memory pagein probe, since these events represent synchronous delays of potentially many milliseconds.
- For Node especially, it's often interesting to know when your process comes off-CPU, and what it's doing that caused the kernel to take it off-CPU. That's a one-liner (sketched below) -- but it requires instrumenting the scheduler.
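That off-CPU one-liner looks roughly like this (a sketch, again assuming the process of interest is named "node"):

```
/*
 * When a node process comes off-CPU, record the kernel stack (why the kernel
 * took it off) together with the user stack (what the program was doing).
 */
sched:::off-cpu
/execname == "node"/
{
	@[stack(), ustack()] = count();
}
```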
In order to support instrumenting these contexts, the actions that you're allowed to take in a probe are pretty constrained. You can't do anything that requires servicing a pagefault, for example, or enter an infinite loop. And since DTrace can't know what loops will be infinite, it doesn't allow explicit loops at all.
These constraints make saving a userland stack trace from DTrace tricky. Walking the stack as we described above almost always works, because the stack memory is rarely paged out, but that whole bit about accessing the process's symbol table to convert memory addresses to their human-readable names? That can easily involve a bunch of operations that are at the very least expensive, and at worst impossible in probe context (because the pages may not be available).
To work around this problem, DTrace defers the resolution of symbol names until the data is consumed. That is, when a probe fires and you've asked DTrace to print out a stack trace, all that's done immediately is to record the process id and the list of memory addresses that make up the call stack. When your dtrace process in userland consumes the data, it uses system debugger interfaces to look up the addresses in the process that was traced and translate them to the appropriate function name as described above.[3]
What about dynamic environments?
All this works well for native code, but what about dynamic environments? These can vary wildly. I'm going to focus on Node.js, since that's the one I'm very familiar with.
Node.js is based on V8, the JavaScript VM that also powers Google's Chrome browser. With V8, walking the stack works basically the same way as for a native program (walking frame pointers, recording the adjacent instruction pointers), but there's a huge problem when it comes to resolving instruction pointer addresses to human-readable names: the compiled code for JavaScript functions doesn't correspond to symbols in the process's symbol table!
For a native process, we knew 0xfee76395 was inside the fork function because the process has a symbol table (built when the program was compiled, linked, and loaded) that says that fork starts at 0xfee76395 and is 0x24 bytes long. But in a JavaScript program, we may have a function at address 0x8004039, and that address doesn't correspond to anything in the process's symbol table. That's because that function didn't exist when the program started: it was dynamically created when some JavaScript code used the function keyword to define a new function, and V8 compiled that to native code, and stored those instructions in the heap. There's no way for a native code debugger to "know" that this corresponds to, say, the fs.readFile JavaScript function.
Suppose you had some way to ask V8: what's the name of the JavaScript function at 0x8004039? DTrace could do the same thing it does for native stack traces, which is to record just the addresses and resolve these names later, right? Unfortunately, that doesn't work for dynamic environments because functions themselves are stored on the runtime heap and can actually move around during execution as a result of garbage collection or reoptimization. So the function at 0x8004039 may no longer be at 0x8004039 when the "dtrace" command gets around to resolving the name. We have to resolve the name when we actually record the stack trace.
Enter ustack helpers
So we have these constraints:
- We must record the stack trace inside the kernel, using only operations that are safe to execute in a DTrace probe.
- We must resolve the symbol names when we record the stack trace -- again, in the kernel, using only safe operations.
- The process of resolving symbol names is totally VM-dependent, and like good software engineers, we don't want to encode VM-internal details in some other component (like the OS kernel).
These constraints essentially define the solution: VM implementors write a chunk of code in D that knows how to translate a (frame pointer, instruction pointer) pair into a human-readable function name. The code is safe by virtue of being in D, which can only express operations that are safe in DTrace probes. That code (the helper) gets glued onto the binary during the build process and loaded into the kernel when the VM process starts up. When a user needs to get a stack trace from that process, the kernel executes the helper to resolve the function names.
The helper translates a frame pointer and instruction pointer into a human-readable function name. In the example above, it translates 0x8004039 to "fs.readFile". (Since JavaScript function names are not unique, the Node ustack helper actually translates it to something more complete like "(anon) as fs.readFile at fs.js line 123".)
The guts of a ustack helper are totally dependent on the dynamic environment it's targeted at. The V8 helper uses the fact that when V8 generates the instructions to call a JavaScript function that's been compiled to native code, it pushes onto the stack a pointer to a C++ object that it uses internally to keep track of the function being called. From that object, we can follow pointers to get the name of the function, the location where it was defined, and so on.
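To make that concrete, a helper clause can chase those pointers with copyin(). This is only a sketch: the offsets below are invented for illustration (the real helper derives V8's class offsets as part of the build), and the macro assumes the script is preprocessed with cpp (dtrace -C):

```
#define	COPYIN_UINT32(addr)	(*(uint32_t *)copyin((addr), sizeof (uint32_t)))

dtrace:helper:ustack:
{
	/* The helper is handed the frame pointer as arg1 (more on that below). */
	this->fp = arg1;

	/* The JSFunction pointer sits a couple of words below the frame pointer. */
	this->func = COPYIN_UINT32(this->fp - 8);

	/* JSFunction -> SharedFunctionInfo -> function name (offsets invented). */
	this->shared = COPYIN_UINT32(this->func + 0x14);
	this->funcname = COPYIN_UINT32(this->shared + 0x4);
}
```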
So you want to build a ustack helper
As I mentioned above, a few ustack helpers have been written: for Java, for Python, and for Node.js. I know of two more that people have expressed interest in developing: Erlang and Ruby. In general, I'd break the process down into a few steps.
1. Collect some example core files from known programs, where you know what the correct stack trace should look like.
2. Study the source code and the core files until you can construct the correct stack trace by hand from the core file. That is, it's not necessarily an automated procedure, but you can at least identify the right frames and for each one, a set of manual steps that will get you to the function names.
3. Automate printing of the stack trace, based on the manual algorithm you came up with in step 2.
4. Implement (3) in a D script: that will become the ustack helper.
Step 1: collect some example cores
The process of figuring out how a dynamic environment lays out its stack can be grueling and time-consuming. To explore at your own pace, it's crucial to have a core file from a known program, where you know the correct stack trace. When I started, I spent enough time reading the V8 source to discover that there were a few different types of frames in V8, including regular JavaScript frames, constructor frames, argument adaptor frames, and a few others. Then I wrote a fairly simple Node program that exercised a bunch of these cases: from the top-level, it calls a regular top-level function, which calls a constructor, which calls another constructor, a method (defined as an anonymous function), and so on, until the last function goes into an infinite loop. That way, once I started the program, I could use gcore(1M) to create a core file from the running program. The result is that I had a memory snapshot I could play with in the debugger that had most types of frames I would care about. I could play around with this at my leisure. Later I would profile the same program to test the ustack helper.
Step 2: figure out how to manually produce a stack trace
This is where you'll spend much of your time. The difficulty here depends a lot on how complex the environment is and on the quality of the runtime's source code and documentation. (For some environments (like Perl), it may be impossible to write a ustack helper, at least without new DTrace features. ustack helpers assume at the most fundamental level that stacks are laid out just as they are in native code, using frame pointers and instruction pointers. There's nothing that says a runtime environment has to actually do it that way.)
For V8, the basic process was simple, though it still took a while to work out. I started with the code that V8 itself uses when constructing a stack trace, as when you print an Error's stack trace. I studied it for a while, took a lot of notes, and tried to replicate the algorithm by hand from the core file.
I strongly recommend building tools to help yourself. I used MDB, specifically because it makes it easy to write new commands with C code. I quickly wrote a bunch of commands to tell me, for a given memory address, what I was looking at. This was critical: long before I was able to print a stack trace, I had learned that the first step was to print out the "Function" object that V8 stores on the stack, and that that object refers to a SharedFunctionInfo object that includes the name, and that that points to a Script object that includes the script name where the function was defined. The function and script names are stored as Strings, which were AsciiStrings or ConsStrings. So the first thing I did was to write debugger commands that could identify what kind of object I was looking at. This became the "::v8type" MDB command:
```
> a7790941::v8type
0xa7790941: JSFunction
```
Then I wrote commands to print out the C++ objects so I could inspect them. This became the "::v8print" MDB command:
```
> 0xa7790941::v8print
a7790941 JSFunction {
    a7790941 JSObject {
        a7790941 JSReceiver {
            a7790941 HeapObject < Object {
                a7790940 map = 9f009749 (Map)
            }
        }
        a7790944 properties = a26080a1 (FixedArray)
        a7790948 elements = a26080a1 (FixedArray)
    }
    a7790950 prototype_or_initial_map = ba9080a1 (Oddball: "hole")
    a7790954 shared = a777b6f5 (SharedFunctionInfo)
    a779095c literals_or_bindings = a7790965 (FixedArray)
    a7790960 next_function_link = ba9299a1 (JSFunction)
}
```
Then I wrote commands for decoding the string objects as strings. This became "::v8str":
```
> 0xa7790f81::v8str
"WriteStream.write"
> 0xa7790f81::v8str -v
ConsString
    ptr1: a2615b7d
    ptr2: 892ef331
    SeqAsciiString length: 11 chars (11 bytes), will read 11 bytes from offset 0
    SeqAsciiString length: 6 chars (6 bytes), will read 6 bytes from offset 0
"WriteStream.write"
```
It sounds like a lot of work up front, but it paid off big when I could poke around much more easily: I could start with a pointer from the stack that I thought should be a Function object, and explore what information it pointed to. For example, if I have this frame from the native stack (which you can get with $C in MDB):
```
Frame ptr   Instruction ptr
0804792c    0x7560e19a
```
I discovered from the source that it looked like a JSFunction object was pushed two words below the frame pointer, so I checked that out:
```
> 0804792c-0x8/p
0x8047924:      0xa7737d65
> 0xa7737d65::v8type
0xa7737d65: JSFunction
> 0xa7737d65::v8print
a7737d65 JSFunction {
    a7737d65 JSObject {
        a7737d65 JSReceiver {
            a7737d65 HeapObject < Object {
                a7737d64 map = 9f009749 (Map)
            }
        }
        a7737d68 properties = a26080a1 (FixedArray)
        a7737d6c elements = a26080a1 (FixedArray)
    }
    a7737d74 prototype_or_initial_map = ba9080a1 (Oddball: "hole")
    a7737d78 shared = ba941471 (SharedFunctionInfo)
    a7737d80 literals_or_bindings = a26080a1 (FixedArray)
}
```
and so on.
Besides the ability to explore more easily, with not much more work, I wrote a few commands to print the V8 representations of objects, arrays, and so on as their JavaScript values -- which gave me postmortem debugging for JavaScript as well. This became "::jsprint":
```
> 1f712ffd5601::jsprint
{
    protocol: "http:",
    slashes: true,
    auth: null,
    host: "www.snpp.com",
    port: null,
    hostname: "www.snpp.com",
    hash: null,
    search: null,
    query: null,
    pathname: "/episodes/3F02.html",
    path: "/episodes/3F02.html",
    href: "http://www.snpp.com/episodes/3F02.html",
}
```
This brings up a related point: writing a ustack helper is grueling, and I found it never paid to take shortcuts. I'd rather make only a little progress each day, knowing what each step was doing, than try to keep it all in my head and be confused about why things didn't work as expected.
Back to the task at hand. Since you wrote the example program, you know what the stack trace in the core file should look like. (You can even have the program print the stack trace using the runtime's built-in mechanism -- in JavaScript, this would be console.log(new Error().stack).) The first milestone will be when you can construct that stack trace by hand from the core file. That is, when you can look at the frame pointer in %ebp, follow that to the other frames, and for each one, find the right pointers to follow to get you to the name of the function at that frame.
Step 3: automate printing a stack trace
The next step is to automate that process of printing out the stack trace. As with the rest of this project, I'd strongly recommend building this incrementally. First build commands that can print out a frame pointer with a useful description of the function it references:
```
> 0804792c::jsframe -v
8047940 0x756266db _tickFromSpinner (a7737dad)
    file: node.js
    posn: position 13051
```
and then implement something that walks the stack and labels each frame. You could skip the first step, but you need to automate this procedure to build the ustack helper anyway, and it will help significantly to iron out the details in a traditional language like C, which is easier to write and has rich facilities for debugging.
Once this works reliably, create some more example programs and test it on those. Create core files from production processes and test it on those, too. You may find that there were some cases you missed in your example programs.
Step 4: build the ustack helper
Once you have an automated command that reliably prints a stack trace for an arbitrary core file, you've got to implement that same logic in D.
The basic idea is that you define a D program with pseudo-probes called "dtrace:helper:ustack:". The interface is:
- The entire script is executed for each frame in the stack. You're not responsible for walking the stack; you've just got to translate the current frame to a string.
- arg0 and arg1 are the current frame's instruction pointer and frame pointer, respectively. This is all the context you have.
- The last value in the last clause should be an ASCII string describing the frame. In practice, this is usually something you've allocated with the DTrace alloca() subroutine and then filled in yourself.
- If a frame's label starts with "@", the string is considered an annotation, rather than a replacement for the name DTrace would have used. For JIT'd environments like V8, this isn't useful, because the name DTrace would have used is just a memory address that's likely not meaningful to anybody. For environments like Python, though, the original name might be "py_val", which might be useful to a VM developer.
There are some patterns that have emerged in the existing ustack helpers:
- Use "this" variables to store state. These are clause-local: they'll persist through the execution of the script on a single frame. These aren't initialized by default, so you'll want to clear these at the start to avoid inadvertently picking up values from previous invocations.
- At the beginning, allocate a fixed-size string to store your result. I kept track of this as this->buf. You'll also want to keep track of your current offset in the string, as this->off.
- It's helpful to have macros like APPEND_CHR(some_character), which is usually just #define APPEND_CHR(c) (this->buf[this->off++] = (c)). Then build up macros like APPEND_NUM (for decimal numbers) and APPEND_PTR (for pointer values). See the V8 ustack helper for details.
- When done, set "this->done = 1". All of your clauses after the first should be predicated with /!this->done/.
The V8 ustack helper built up several more complicated layers of APPEND macros for the various kinds of strings in V8.
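Putting those pieces together, a skeletal helper might look something like the following. This is only a sketch: it ignores arg0 and arg1 entirely and just labels every frame "hiworld", but it shows the clause structure, the alloca()'d buffer, the APPEND_CHR macro, and the this->done pattern described above (the macro assumes the script is preprocessed, as with dtrace -C):

```
#define	APPEND_CHR(c)	(this->buf[this->off++] = (c))

dtrace:helper:ustack:
{
	/* Clear clause-local state and allocate the result buffer. */
	this->done = 0;
	this->off = 0;
	this->buf = (char *)alloca(32);
}

dtrace:helper:ustack:
/!this->done/
{
	/* A real helper would decode arg0/arg1 here instead of a fixed label. */
	APPEND_CHR('h');
	APPEND_CHR('i');
	APPEND_CHR('w');
	APPEND_CHR('o');
	APPEND_CHR('r');
	APPEND_CHR('l');
	APPEND_CHR('d');
	APPEND_CHR('\0');
	this->done = 1;
}

dtrace:helper:ustack:
/this->done/
{
	/* The last value in the last clause is the string DTrace uses for the frame. */
	this->buf;
}
```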
The rest is entirely runtime-specific, so all I can offer are some general tips:
- Start with a trivial ustack helper and integrate that into your runtime's build system. The simplest example I came up with, for the simplest possible program, just printed out "hiworld" for each frame -- much like the skeleton sketched above.
- If there's an error in your D script, DTrace will completely ignore the translation for that frame. It's very hard to debug this, so I strongly recommend an incremental approach. If I was really lost, I would set this->done in some early clause, see if I got output, and move that back until I found the clause that was failing.
- As you build up the helper, use the output string to debug. DTrace does have a helper tracing mechanism that's beyond the scope of this blog post, but it's rather low-level. I found it easier to use printf-style debugging: inserting debug statements directly into the output string, so they'd show up in the DTrace output. So I'd first print out the address of some pointer, then I'd try decoding it as a string, and so on. Since you've already integrated the helper into the program's build system, you can iterate pretty quickly.
You can also consider writing a plain old D script to iterate on most of the logic for the helper. The downside is that once you get it working perfectly, if there's an error after you translate the script into a helper, it'll be hard to track down where the error was. I usually found it easier to develop the helper itself.
Profit
There's no doubt this is all a lot of work, but the resulting observability has proven tremendously valuable for our work with Node.js. We use the ustack helper primarily to profile Node programs, but also to debug them (i.e., to find out what part of a program is responsible for some other system event), and we use it both in development and in production.
Besides that, the result of this project was not just the ustack helper, but a complete postmortem debugging environment for Node programs. We configure most of our services to dump core automatically when an uncaught exception is thrown, and we've root-caused dozens of nasty bugs (including several in Node core) from the core files.
Caveats
If you've used ustack helpers at all before, you've probably already discovered that they don't work on OS X. I'm told that the way to get this changed is to contact your Apple developer liaison (if you're lucky enough to have one) or file a bug report at bugreport.apple.com. I'd suggest referencing existing bugs 5273057 and 11206497. I'm told that more bugs filed (even if closed as dups) show more interest and make it more likely Apple will choose to fix this.
That's all I've got. If you've got questions, your best bet is the dtrace-discuss list. I follow that, as do many others working with DTrace. You can also comment below or tweet me @dapsays.
Many thanks to Bryan, who implemented support for ustack helpers, encouraged me to take on the V8 ustack helper project, and helped enormously along the way.
Footnotes
[1] This section looks specific to C/C++, but the details are largely the same in dynamic environments like Java, Node.js, and Python.
[2] I say "typically" because it's possible to disable this behavior at compile-time with gcc. This is ostensibly done for the performance improvement of having an additional register available and to avoid pushing and popping the frame pointer, though reports of performance improvements never appear to be based on hard data, and it's highly doubtful that the effect would be measurable on most programs. On the other hand, the resulting breakage prevents traditional debuggers and DTrace from grabbing stack traces on such programs. (Even if one could show an actual performance benefit, it's important to consider that a few percent performance improvement likely does not justify giving up the ability to observe the performance of an application in production, since that precludes many other future improvements.)
[3] This leads to the slightly annoying behavior that if you try to trace callstacks from a short-lived process, the process may be gone by the time the userland process goes to resolve the function names, and you wind up with only the memory addresses. You can work around this by using "-c" to start the process, or "-p" to attach to an existing one. This causes DTrace to attach to the process so that after it exits, the process will stick around until DTrace gets what it needs from it. While slightly annoying, it's a small price to pay for instrumenting arbitrary contexts in production.
Post written by David Pacheco