Tags: compilation, fortran, logical-operators

How can the compiler affect the way logical values are represented in Fortran?


This question is based on two other questions [1] [2], but I figured I'd make more progress if I asked something less specific and worked from there by myself.

According to a user in the first post, the values Fortran uses to represent the logical type can vary depending on the compiler:

Fortran's logical values do not necessarily map to 0 and 1 as in C or Matlab; rather, the mapping is compiler-dependent.

Assuming that statement is correct, I do not understand how or why. Why is it the case? Why would different compilers not represent such a well-established data type in the same way? If a standardized representation of data types is not a reasonable expectation, why is that so in Fortran but not in the other major languages mentioned? Is it a historical leftover, or is there a good reason for it?

And what are those representations? How could I determine what my compiler does? Is it in the documentation, or can I determine it from the code's output?

Lastly, does using a C wrapper, as in the original examples, make a particular choice of representation more likely, or does it not matter at all?

I wasn't able to find much on the issue, though I suspect that is because I don't know the right keywords to search for. Any help shedding light on the matter would be appreciated.

Context

If the context is at all relevant, the compiler I personally used was Mingw-64 6.3, but it was run inside Matlab 2022a with whatever Mex adds. The compiled file was a LAPACK C wrapper for Matlab, which can be found at [3].

I've had some trouble calling it from within a Simulink subsystem block, as the vector containing the logical flags seems to be ignored there regardless of data type (int64/32/16, float, logical, string; int8 is not even accepted). Upon calling the 'dtrsen' function with the correct flags, the resulting eigenvalue vector comes out unordered and seemingly random, unlike when I do the same from within a Matlab script, where the result comes out in descending order of the marked eigenvalues.

To be honest, I've come to believe it's probably a Simulink data-transfer issue and not a compiler one, since the problem doesn't occur when calling the file from within a Matlab script, but I'd like to understand better what I'm dealing with to be sure. I'll still try a few other things, but I'll likely end up writing my own S-function in the end, and this knowledge could save me hours of debugging my mistakes.

Also I'm just a curious guy.

[1] https://scicomp.stackexchange.com/questions/43101/reordering-eigenvalues-in-schur-factorization-matlab-ordschur-and-lapack-dtrse

[2] https://scicomp.stackexchange.com/questions/44497/matlab-ordschur-producing-different-results-from-lapacks-dtrsen-in-simulink

[3] https://mathworks.com/matlabcentral/fileexchange/16777-lapack


Solution

  • There are just two values of the logical type (with default kind): .false. and .true.. The exact representation in memory is not specified by the standard. If the processor uses a binary representation, all bits zero is the obvious choice for .false., so you will not see anything else in practice. But what about .true.? Is setting only the lowest bit, or setting all bits, the more obvious choice? For two's-complement signed integers, one corresponds to the integer value 1, the other to -1.

    The logical value of bit patterns other than those used for the .false. and .true. values is not defined by the standard.
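    The practical consequence for C wrappers can be sketched as follows. Suppose one Fortran compiler stores .true. as the integer 1 (gfortran does this) and another stores it as -1 (all bits set, which Intel's compiler has used). Both specific values are stated here as illustrations, not guarantees. A C wrapper that tests a received flag with `flag == 1` silently misreads the all-bits-set form, while C's native truth rule, `flag != 0`, handles both:

    ```c
    #include <stdio.h>

    int main(void) {
        /* Two hypothetical in-memory representations of Fortran .true.
           as seen from C, assuming a 4-byte default logical: */
        int true_as_1  = 1;   /* lowest bit set, e.g. gfortran */
        int true_as_m1 = -1;  /* all bits set under two's complement */

        /* A wrapper comparing against 1 misreads the second form: */
        printf("== 1 test: %d %d\n", true_as_1 == 1, true_as_m1 == 1);

        /* Testing against zero accepts both representations: */
        printf("!= 0 test: %d %d\n", true_as_1 != 0, true_as_m1 != 0);
        return 0;
    }
    ```

    This prints `== 1 test: 1 0` and `!= 0 test: 1 1`, i.e. only the zero/nonzero test agrees for both representations. That is why interop code should compare logicals against zero (or, in modern Fortran, pass `logical(c_bool)` from iso_c_binding) rather than assume a particular value for .true..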