While porting our NSIS setup from 2.46 to 3.03 Unicode, I had a problem with a function that works around the maximum string length in NSIS by using the System plugin:
System::Alloc 8096 ; buffer for the concatenated text
Exch $1 ; $1 = buffer base address
IntOp $2 $1 + 0 ; $2 = current write position
${For} $iLoopIndex1 1 $stackSize
  ${stack::dll_read} "$stacktext" "$iLoopIndex1" $TmpVal $stackReturn
  StrLen $3 "$TmpVal"
  System::Call "*$2(&t$3 $\"$TmpVal$\")" ; append $TmpVal at $2
  IntOp $2 $2 + $3 ; advance by $3
${Next}
System::Call "user32::SetWindowText(i $CstPage.InfoExt.Text, i r1) i .s"
Pop $TmpVal
System::Free $1
The function also uses the stack plugin, which we ported to support Unicode.
StrLen returns the number of characters in the string, and that count is used both to write the string into the buffer and to advance the write position. In the Unicode build each character occupies two bytes, so advancing by the character count corrupts the text in the buffer.
I fixed the problem by doubling the result of StrLen.
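As a sketch, the fix changes the loop body roughly like this, assuming (as the original code does) that the &t field size and the pointer advance take the same count; using ${NSIS_CHAR_SIZE} instead of a hard-coded 2 keeps the same source correct in an ANSI build:

```nsis
StrLen $3 "$TmpVal"
IntOp $3 $3 * ${NSIS_CHAR_SIZE} ; convert characters to bytes (2 in Unicode builds)
System::Call "*$2(&t$3 $\"$TmpVal$\")"
IntOp $2 $2 + $3 ; advance the write position in bytes
```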
The question is now: is it actually safe to simply double the result of StrLen?

NSIS Unicode installers use UTF-16LE strings, just like Windows. UTF-16LE is not fixed width in terms of code points (surrogate pairs etc.), but it is safe to double the return value from StrLen, because StrLen counts UTF-16 code units, not code points.
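A quick way to see the code-unit behaviour (hypothetical snippet, assuming an NSIS 3 Unicode build, where the compiler supports ${U+xxxx} string escapes):

```nsis
; U+1D11E (MUSICAL SYMBOL G CLEF) lies outside the BMP, so it is
; stored as a surrogate pair; StrLen reports both code units,
; i.e. 2 for this single code point in a Unicode build.
StrLen $0 "${U+1D11E}"
DetailPrint "StrLen: $0"
```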
A common "string size in bytes" idiom looks like this:
StrLen $1 "$2"
IntOp $1 $1 + 1 ; Add \0 terminator
!if "${NSIS_CHAR_SIZE}" > 1
IntOp $1 $1 * ${NSIS_CHAR_SIZE}
!endif
DetailPrint "$2 is $1 bytes"
NSIS_CHAR_SIZE is the size in bytes of a character code unit: 1 in ANSI installers and 2 in Unicode installers.