Suppose that I have a class called Rational which represents rational numbers "purely", i.e. it maintains the representation of a/b as (a, b) and implements the usual operators +, -, *, / and others to work on those tuples, instead of evaluating the actual fractions on every operation.

Suppose now that I want to define what happens if I add a Rational instance to an Int, in addition to the already defined behavior for Rational added to Rational. Then, of course, I might end up wanting to add Rational to Double, or to Float, BigInt, and other numeric types...
+(Rational, _):

def + (that: Rational): Rational = {
  require(that != null, "Rational + Rational: Provided null argument.")
  new Rational(this.numer * that.denom + that.numer * this.denom, this.denom * that.denom)
}

def + (that: Int): Rational = this + new Rational(that, 1) // Constructor takes (numer, denom) pair
def + (that: BigInt): Rational = ...
...
Any:

def + (that: Any): Rational = {
  require(that != null, "+(Rational, Any): Provided null argument.")
  that match {
    case that: Rational => new Rational(this.numer * that.denom + that.numer * this.denom, this.denom * that.denom)
    case that: Int      => new Rational(this.numer + that * this.denom, this.denom) // a/b + c = (a + cb)/b
    case that: BigInt   => new Rational(this.numer + that * this.denom, this.denom) // Int and BigInt need separate cases: a `that: Int | BigInt` pattern can't bind `that`
    case that: Double   => ...
    ...
    case _ => throw new UnsupportedOperationException("+(Rational, Any): Unsupported operand.")
  }
}
One benefit I'm seeing from the pattern matching approach is a saving in terms of actual source code lines, but perhaps with a decrease in readability. Perhaps more crucially, I have control over what I do when I'm provided with a type I haven't defined behavior of + for. I'm not certain how that could be attained via the first approach; perhaps by adding an overload for Any underneath all the others? Either way, it sounds dangerous, as the sketch below suggests.
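For instance (a hypothetical sketch, not code from my actual class): with a trailing Any overload, the specific overloads still win resolution, but every other operand type-checks too, and mistakes only surface at runtime.

class Rational(val numer: BigInt, val denom: BigInt) {
  def + (that: Int): Rational =   // specific overloads still win resolution
    new Rational(numer + that * denom, denom)
  def + (that: Any): Rational =   // fallback catches everything else
    throw new UnsupportedOperationException(s"Unsupported operand: $that")
}

val r = new Rational(1, 2)
r + 1       // fine: resolves to +(Int)
r + "oops"  // compiles, but fails only at runtime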
Ideas on whether one should opt for the first or second approach? Are there any safety issues I'm not seeing? Am I opening myself to ClassCastExceptions or other kinds of exceptions?
The way to enforce a compile-time error is to ensure that the plus method cannot actually take type Any, via a type constraint, an implicit parameter, or the like.
One way of dealing with this would be to make use of the Scala Numeric type class. It should be perfectly possible to create an instance for Rational, since you can easily implement all the required methods, and at that point you can define plus as

def +[T: Numeric](that: T): Rational

You'd now also be able to pull out the toInt/toLong/toFloat/toDouble methods of the implicit Numeric argument to handle unknown classes instead of throwing a runtime error, if you wanted. And even if you don't, you've at least significantly cut down the erroneous types that can be passed.
You could also define your own type class and appropriate instances of it for the types you want to support. Then you can either leave the addition logic in the + method or move it into the type class instances:
trait CanBeAdded[T] {
  def add(t: T, rational: Rational): Rational
}

object CanBeAdded {
  implicit val int: CanBeAdded[Int] = new CanBeAdded[Int] {
    override def add(t: Int, rational: Rational): Rational = ???
  }

  implicit val long: CanBeAdded[Long] = new CanBeAdded[Long] {
    override def add(t: Long, rational: Rational): Rational = ???
  }

  implicit val rational: CanBeAdded[Rational] = new CanBeAdded[Rational] {
    override def add(t: Rational, rational: Rational): Rational = ???
  }
}

case class Rational(a: BigInt, b: BigInt) {
  def +[T: CanBeAdded](that: T): Rational = implicitly[CanBeAdded[T]].add(that, this)
}
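Once the ??? placeholders are implemented (hypothetically, for Int, a/b + n = (a + nb)/b), unsupported operand types are rejected at compile time rather than at runtime:

val half = Rational(1, 2)
half + 3        // fine: CanBeAdded[Int] is in implicit scope
// half + "oops" // does not compile: no CanBeAdded[String]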
I like the second option, because I have to doubt that allowing your Rational type to be added to any numeric type makes sense. You mention that you want + to be able to take in Doubles, but exact representation combined with the rounding errors that often crop up in Doubles seems like it could lead to some very weird and counterintuitive behaviour, with results that don't make much sense.
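To illustrate (a quick demonstration, not from the question itself): the Double literal 0.1 does not hold the exact value 1/10, so an "exact" conversion into a Rational would produce a rather surprising fraction.

// java.math.BigDecimal's Double constructor exposes the exact binary value:
println(new java.math.BigDecimal(0.1))
// prints 0.1000000000000000055511151231257827021181583404541015625
// i.e. exactly 3602879701896397 / 2^55, not 1/10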