I've created a UTF-8 script for PowerShell that contains non-ASCII characters.
characters.ps1:
Write-Host "ç â ã á à"
When the script is run in the PowerShell console, it outputs the wrong characters.
However, if I type the characters directly in the console, they are shown as expected.
Does anyone know what causes this behavior?
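For reference, this is roughly how I inspected the console and the file while debugging (a minimal sketch; the actual code page values depend on the system locale):

# Show the code page the console is currently using
chcp
# Show the encodings PowerShell uses for console output and for piping to native programs
[Console]::OutputEncoding
$OutputEncoding
# Read the script back, forcing UTF-8, to confirm the file content itself is intact
Get-Content .\characters.ps1 -Encoding UTF8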
The problem arose from a script I wrote that has hardcoded paths containing non-ASCII characters. When I try to pass such a path as an argument to cmdlets (in this case I was going to robocopy a folder), the command fails because it cannot find the path (which is also displayed incorrectly on screen).
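To illustrate, the failing call looked something like this (the paths here are hypothetical, not my real ones):

# Hypothetical hardcoded path with a non-ASCII character
$source = "C:\Dados\Relatórios"
robocopy $source "D:\Backup\Relatórios" /E
# With the script saved as UTF-8 without BOM, the path bytes are misinterpreted,
# so robocopy reports that the source directory does not exist.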
Changing the encoding of the script to UTF-8 with BOM solved the issue.
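The BOM can also be added without opening an editor; a minimal sketch, assuming the script is already valid UTF-8:

# Re-write the file as UTF-8 with BOM.
# In Windows PowerShell 5.1, -Encoding UTF8 emits a BOM; in PowerShell 6+ use -Encoding utf8BOM.
$text = Get-Content .\characters.ps1 -Raw -Encoding UTF8
Set-Content .\characters.ps1 -Value $text -Encoding UTF8 -NoNewline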
I was using Sublime Text with the EncodingHelper plugin to control the character set of the script. It was set correctly to UTF-8.
I changed the encoding of the script in SublimeText to "UTF-8 with BOM" and the output was shown correctly.
I created the same script with Notepad++, which defaults to "UTF-8 with BOM", and the string was shown correctly in the console.
I changed the encoding of the script in Notepad++ to "UTF-8 without BOM" and it was shown incorrectly.
It seems PowerShell cannot correctly guess the encoding of UTF-8 files that have no BOM; apparently it falls back to the system's ANSI code page and misreads the non-ASCII bytes.
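A quick way to check whether a file actually starts with a BOM (the file name is just an example):

# The UTF-8 BOM is the byte sequence EF BB BF at the very start of the file
$bytes = [System.IO.File]::ReadAllBytes("$PWD\characters.ps1")
$bytes.Length -ge 3 -and $bytes[0] -eq 0xEF -and $bytes[1] -eq 0xBB -and $bytes[2] -eq 0xBF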