The problem is that when I build a 32-bit application.exe, I get an application with 16-bit machine code.
Here is the code (taken from a book):
.386
.model flat
.const
URL db "http://www.lionking.org/`cubbi/", 0
.code
_start:
xor ebx, ebx
push ebx
push ebx
push ebx
push offset URL
push ebx
push ebx
; call ShellExecute
push ebx
; call ExitProcess
end _start
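For reference, the two commented-out API calls would need declarations along these lines to actually assemble and link. This is only a sketch assuming MASM with the standard Win32 import libraries; ShellExecuteA and ExitProcess are the usual Win32 entry-point names, not something taken from the book excerpt:
.model flat, stdcall             ; Win32 APIs use the stdcall convention (replaces .model flat above)
option casemap:none              ; keep ShellExecuteA / ExitProcess case-sensitive

ShellExecuteA proto :DWORD,:DWORD,:DWORD,:DWORD,:DWORD,:DWORD  ; 6 stack arguments
ExitProcess   proto :DWORD                                     ; 1 stack argument

includelib shell32.lib           ; provides ShellExecuteA
includelib kernel32.lib          ; provides ExitProcess
With those declarations the calls could be uncommented as call ShellExecuteA and call ExitProcess, since the arguments are already pushed right to left as stdcall expects.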
To build the application, I type the build commands in the console.
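A typical 32-bit build with Microsoft's ml and link looks something like this (program.asm is a placeholder file name; these may differ from the exact commands used):
ml /c /coff program.asm
link /subsystem:windows program.obj
The /coff switch makes the assembler emit a 32-bit COFF object, which the 32-bit linker then turns into a PE executable.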
Then I get an executable file with 16-bit machine code:
CPU = ?86, Virtual 8086 Mode, Id/Step = 0F62, A20 enabled
09E4:0000 33DB XOR BX,BX
09E4:0002 53 PUSH BX
09E4:0003 53 PUSH BX
09E4:0004 53 PUSH BX
09E4:0005 680000 PUSH 0000h
09E4:0008 0000 ADD [BX+SI],AL
09E4:000A 53 PUSH BX
09E4:000B 53 PUSH BX
09E4:000C 53 PUSH BX
09E4:000D 0000 ADD [BX+SI],AL
09E4:000F 006874 ADD [BX+SI+74h],CH
09E4:0012 7470 JZ Short 0084
I don't need properly working code. I just want to assemble an application with 32-bit code, or at least understand what I'm doing wrong.
Thank you for paying attention.
Unless you tell the disassembler whether your code is 16-bit or 32-bit, and unless it can guess somehow (e.g. from the format of the executable, if there is one), the disassembler cannot know which of the two it is.
I've taken the instruction bytes from your 16-bit disassembly and disassembled them as 32-bit code:
00000000: 33DB          xor ebx,ebx
00000002: 53            push ebx
00000003: 53            push ebx
00000004: 53            push ebx
00000005: 6800000000    push 00000000
0000000A: 53            push ebx
0000000B: 53            push ebx
0000000C: 53            push ebx
0000000D: 0000          add [eax],al       ; 0s between code & data
0000000F: 006874        add [eax+74],ch    ; db 0,"ht"
00000012: 7470          je 00000084        ; db "tp"
This is the correct 32-bit machine code generated from your assembly source; you're just not disassembling it correctly. Somehow you're disassembling it as 16-bit code, which is wrong.
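A quick way to check this is to decode the same bytes in both modes with a disassembler that lets you pick the bitness, or to use a tool that reads the PE header. The tool and file names below are only suggestions (code.bin would be a file containing just the raw code bytes, extracted however you like):
ndisasm -b 16 code.bin
ndisasm -b 32 code.bin
dumpbin /disasm program.exe
The first command reproduces the 16-bit listing you got (xor bx,bx, push bx, ...), the second gives the 32-bit listing shown above, and dumpbin picks 32-bit automatically because it reads the PE header. Loading the EXE in a 32-bit-aware debugger such as OllyDbg or WinDbg works the same way, instead of assuming 16-bit real/V86 mode.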