I can't find (or even formulate a Google query to find) an answer to this simple (or noob) question.
I'm inspecting an application with the objdump -d
tool:
. . .
5212c0: 73 2e jae 5212f0 <rfb::SMsgReaderV3::readSetDesktopSize()+0x130>
5213e8: 73 2e jae 521418 <rfb::SMsgReaderV3::readSetDesktopSize()+0x258>
521462: 73 2c jae 521490 <rfb::SMsgReaderV3::readSetDesktopSize()+0x2d0>
. . .
What does the +XXXX
offset in the output mean? How can I relate it to the source code, if possible? (The output is postprocessed with c++filt
.)
It's the offset in bytes from the beginning of the function.
Here's an example from WinDbg, but it's the same everywhere:
This is the current call stack:
0:000> k L1
# Child-SP RetAddr Call Site
00 00000000`001afcb8 00000000`77b39ece USER32!NtUserGetMessage+0xa
This is what the function looks like:
0:000> uf USER32!NtUserGetMessage
USER32!NtUserGetMessage:
00000000`77b39e90 4c8bd1 mov r10,rcx
00000000`77b39e93 b806100000 mov eax,1006h
00000000`77b39e98 0f05 syscall
00000000`77b39e9a c3 ret
And this is what the current instruction is:
0:000> u USER32!NtUserGetMessage+a L1
USER32!NtUserGetMessage+0xa:
00000000`77b39e9a c3 ret
So, the offset 0x0A
is 10 bytes from the function start: 3 bytes for the first mov
, 5 bytes for the second mov
, and 2 bytes for the syscall
.
If you want to relate the offset to your source code, it heavily depends on whether or not the code was optimized.
Also, if the offset is very high, you might not have enough symbols. E.g., with export symbols only, you may see offsets like +0x2AF4
, and then you can't tell anything about the real function any more.