The signum function implements the usual mathematical definition of sign, conditionally returning a value in {-1, 0, 1}. It is a theoretical definition: it takes no account of the computational cost of operations or of the data type of the values, so multiplying by -1 is treated as a zero-cost way of changing sign. Because of that, it is not the most useful treatment of sign for programming.
The case signum a == 0 is not really useful, because you can always test a == 0 directly, without the extra cost of computing signum a. As for the other two values, I think they are used in just three general ways:
either you test whether a value is positive or negative in order to run different code conditionally, as in:
    f x y | signum x == -1 = h x y
          | otherwise      = g x y
or you multiply something by 1 or -1 before operating with it, as in:
    f x y = g x (y * b)
      where b = signum x
or you add 1 or -1 to something before operating with it, as in:
    f x y = g x (y + b)
      where b = signum x
In all these cases, a Bool value would be a better representation of the sign. So we just need functions to decompose a Num into an absolute value and a boolean sign, plus an inverse function that changes the sign of a value depending on a boolean condition (which represents a sign). That function is the equivalent of multiplying by 1 or -1, so we define it as an operator similar to (*):
    sgn a = a >= 0

    (*.) a True = a
    (*.) a _    = -a

    abs a     = a *. sgn a
    signum1 a = 1 *. sgn a
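As a self-contained sketch of these definitions (the type signatures are my addition, and Prelude's abs is hidden to avoid the name clash):

```haskell
import Prelude hiding (abs)

-- Boolean sign: True for non-negative, False for negative.
sgn :: (Num a, Ord a) => a -> Bool
sgn a = a >= 0

-- Apply a boolean sign to a value; plays the role of (* 1) or (* (-1)).
(*.) :: Num a => a -> Bool -> a
a *. True = a
a *. _    = negate a

-- Absolute value and a dichotomous signum, recovered from sgn and (*.).
abs :: (Num a, Ord a) => a -> a
abs a = a *. sgn a

signum1 :: (Num a, Ord a) => a -> a
signum1 a = 1 *. sgn a

-- >>> (abs (-7 :: Int), signum1 (-7 :: Int), sgn (-7 :: Int))
-- (7,-1,False)
```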
I added a dichotomous variant of signum which can only return a value in {-1, 1}. Notice that by adding the clause signum 0 = 0 in front of it we would recover the usual signum function, but that third case is exactly what I think is not generally useful.
We can code an adding operator in a similar way, because adding 1 or -1 depending on the sign of something is a very frequent case (you can see that these operators just treat True as 1 and False as -1):
    (+.) a b = a + 1 *. b
    (-.) a b = a - 1 *. b
We could even enclose these declarations in a class called Signed, for easier use, including proper signatures and fixity declarations.
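A minimal sketch of such a class (the name Signed comes from the text above; the exact signatures, fixities, and the Int instance shown here are my own guesses):

```haskell
-- Same fixities as (*), (+) and (-), so mixed expressions parse as expected.
infixl 7 *.
infixl 6 +., -.

class Num a => Signed a where
  sgn  :: a -> Bool
  (*.) :: a -> Bool -> a
  (+.) :: a -> Bool -> a
  (-.) :: a -> Bool -> a
  -- Default implementations in terms of (*.), as in the text.
  a +. b = a + 1 *. b
  a -. b = a - 1 *. b

-- One possible instance.
instance Signed Int where
  sgn a = a >= 0
  a *. True = a
  a *. _    = negate a
```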
This way, the general examples above become simpler, not just in code but also in execution time and space: we avoid the multiplication (using (*.) instead), we avoid an extra comparison once we have a Bool, we can take the sign from a value of one type and apply it to a value of another type without any type conversion, and we use the small type Bool instead of a potentially large type of class Num. So we gain flexibility while allowing some optimization of code and data types.
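For example, the cross-type use mentioned above: taking the sign of an Int and applying it to a Double, with no fromIntegral needed (a sketch reusing the earlier definitions; the helper name scaleBySignOf is hypothetical):

```haskell
sgn :: (Num a, Ord a) => a -> Bool
sgn a = a >= 0

(*.) :: Num a => a -> Bool -> a
a *. True = a
a *. _    = negate a

-- Hypothetical helper: flip the sign of a Double according to an Int's sign.
scaleBySignOf :: Int -> Double -> Double
scaleBySignOf n x = x *. sgn n

-- >>> scaleBySignOf (-3) 2.5
-- -2.5
```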
My question, then, is if there are cases different than the three general use cases exposed here, that is, cases not easily covered by this approach, cases for which the current signum
function is advantageous against a Bool sign approach. More precisely, can I avoid completely the use of the current signum
function without losing efficiency or code clarity?
Edit: I rewrote the first paragraph in a more neutral fashion, following Reid Barton's comment.
Progress update: the code for this approach was greatly simplified and clarified with the help of the current answers and comments.
I've used a function like this to navigate a cursor to a target offset in a square grid by a series of moves to (orthogonally and diagonally) adjacent cells.
    move :: (Int, Int) -> [(Int, Int)]
    move (0, 0) = []
    move (x, y) = (dx, dy) : move (x - dx, y - dy)
      where dx = signum x
            dy = signum y
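For instance, stepping toward the offset (3, -1) yields one diagonal move followed by two horizontal ones. A self-contained check (restating move from above):

```haskell
-- Decompose a target offset into unit steps toward the origin's target,
-- moving diagonally while both coordinates are nonzero.
move :: (Int, Int) -> [(Int, Int)]
move (0, 0) = []
move (x, y) = (dx, dy) : move (x - dx, y - dy)
  where dx = signum x
        dy = signum y

-- >>> move (3, -1)
-- [(1,-1),(1,0),(1,0)]
```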