Disclaimer: I came to Scala from C#, where I really appreciated LINQ, so I immediately felt at home with iterators and sequences. I missed C#-style yield, but I was able to cook up my own with continuations, even though it pays a performance penalty.
Now, when I miss some method over collections in C#, I just define it as an extension method, and the compiler does a very nice job of compiling the call efficiently.
In Scala, I use the Pimp My Library (enrich-my-library) approach, but I am a little worried about performance.
Unlike my "yield iterator", however, this is a recognized and common pattern. Does the Scala compiler optimize it, removing the creation of the temporary wrapper object?
class RichFoo(f: Foo) {
  def baz = f.bar()
  def baz2 = f.bar() * 2
}

object RichFoo {
  implicit def foo2Rich(f: Foo) = new RichFoo(f)
}
// on the caller side
val f: Foo = ....
f.baz
f.baz2
// this translates, literally, to new RichFoo(f).baz and new RichFoo(f).baz2
If not, why not? It looks like a good and safe optimization to me. Can I "hint" or "force" the compiler in the right direction? What faster alternatives are there?
I would like to use this pattern for my collection of algorithms over iterators/iterables, so that I can write them filter/map style, e.g. collection.baz(lambda).bar(lambda2), rather than in the more efficient/direct but ugly nested form bar(lambda2, baz(lambda, collection)). I am afraid the enriched version will prove too "heavy".
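To make this concrete, here is a minimal sketch of what I have in mind; baz and bar are placeholder names for my own combinators (here they just delegate to filter and map), and the wrapper follows the same pattern as RichFoo above:

import scala.language.implicitConversions

// Placeholder combinators over Iterator; baz/bar simply delegate to
// filter/map for illustration.
class RichIterator[A](it: Iterator[A]) {
  def baz(p: A => Boolean): Iterator[A] = it.filter(p)
  def bar[B](f: A => B): Iterator[B] = it.map(f)
}

object RichIterator {
  implicit def enrich[A](it: Iterator[A]): RichIterator[A] = new RichIterator(it)
}

// with `import RichIterator._` in scope at the call site:
//   collection.iterator.baz(_ > 0).bar(_ * 2)
// instead of the nested form:
//   bar(_ * 2, baz(_ > 0, collection.iterator))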
As @om-nom-nom comments, the solution here (in 2.10) is to use an implicit value class.
implicit class RichFoo(val f: Foo) extends AnyVal {
  def baz = f.bar()
  def baz2 = f.bar() * 2
}
RichFoo now exists at compile time, but at runtime the calls are optimized into static method calls, and so should impose no performance penalty.
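Roughly speaking, the compiler rewrites each method of the value class into an extension method on the companion object and calls that directly, so no RichFoo instance is allocated. A hand-written picture of that rewriting (the names here are illustrative; the actual compiler-generated methods have synthetic names) would be:

// Hand-written equivalent of the rewriting; the real generated method
// has a synthetic name, but the shape is the same.
object RichFooOps {
  def baz(f: Foo) = f.bar()
  def baz2(f: Foo) = f.bar() * 2
}

// so f.baz ends up costing roughly the same as:
//   RichFooOps.baz(f)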
See also Mark Harrah's Introduction to Value Classes, which gives a good overview from the usage perspective.