Tags: generics, rust, traits

How do I use integer number literals when using generic types?


I wanted to implement a function that computes the number of digits of any generic integer type. Here is the code I came up with:

extern crate num;
use num::Integer;

fn int_length<T: Integer>(mut x: T) -> u8 {
    if x == 0 {
        return 1;
    }

    let mut length = 0u8;
    if x < 0 {
        length += 1;
        x = -x;
    }

    while x > 0 {
        x /= 10;
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}

And here is the compiler output:

error[E0308]: mismatched types
 --> src/main.rs:5:13
  |
5 |     if x == 0 {
  |             ^ expected type parameter, found integral variable
  |
  = note: expected type `T`
             found type `{integer}`

error[E0308]: mismatched types
  --> src/main.rs:10:12
   |
10 |     if x < 0 {
   |            ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`

error: cannot apply unary operator `-` to type `T`
  --> src/main.rs:12:13
   |
12 |         x = -x;
   |             ^^

error[E0308]: mismatched types
  --> src/main.rs:15:15
   |
15 |     while x > 0 {
   |               ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`

error[E0368]: binary assignment operation `/=` cannot be applied to type `T`
  --> src/main.rs:16:9
   |
16 |         x /= 10;
   |         ^ cannot use `/=` on type `T`

I understand that the problem comes from my use of integer literals within the function, but I don't understand why the Integer trait bound doesn't solve this.

The documentation for Integer says it implements the PartialOrd, etc. traits with Self (which I assume refers to Integer). By using integer constants which also implement the Integer trait, aren't the operations defined, and shouldn't the compiler compile without errors?

I tried suffixing my constants with i32, but the error messages are the same, with `{integer}` replaced by `i32`.


Solution

  • Several things are going wrong here:

    1. As Shepmaster says, 0 and 1 cannot be converted to everything implementing Integer. Use Zero::zero and One::one instead.
    2. 10 certainly cannot be converted to just anything implementing Integer; you need to use NumCast for that.
    3. a /= b is not sugar for a = a / b; it is a separate trait (DivAssign) that Integer does not require.
    4. -x is a unary operation that is not part of Integer; it requires the Neg trait (since it only makes sense for signed types).
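    To illustrate point 3 in isolation (my own sketch, not part of the original answer): `a /= b` desugars to a call on the separate `DivAssign` trait from `std::ops`, so a generic function must require that trait explicitly before `/=` compiles.

    ```rust
    use std::ops::DivAssign;

    // `x /= T::from(2)` only compiles because of the `DivAssign` bound;
    // `From<u8>` is used here just to produce the constant 2 generically.
    fn halve_in_place<T: DivAssign + From<u8>>(x: &mut T) {
        *x /= T::from(2);
    }

    fn main() {
        let mut n = 10i32;
        halve_in_place(&mut n);
        println!("{}", n); // prints 5
    }
    ```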

    Here's an implementation. Note that you need the bound Neg<Output = T> to make sure that negating a value results in the same type T:

    extern crate num;
    
    use num::{Integer, NumCast};
    use std::ops::Neg;
    
    fn int_length<T>(mut x: T) -> u8
    where
        T: Integer + Neg<Output = T> + NumCast,
    {
        // `T::zero()` replaces the literal `0`, which cannot be
        // converted to an arbitrary `T` implementing `Integer`.
        if x == T::zero() {
            return 1;
        }
    
        let mut length = 0;
        if x < T::zero() {
            // Count the minus sign; `-x` is allowed by the `Neg` bound.
            length += 1;
            x = -x;
        }
    
        while x > T::zero() {
            // `NumCast::from(10)` converts the literal into a `T`;
            // `x = x / ...` avoids needing a `DivAssign` bound.
            x = x / NumCast::from(10).unwrap();
            length += 1;
        }
    
        length
    }
    
    fn main() {
        println!("{}", int_length(45));
        println!("{}", int_length(-45));
    }
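If you'd rather not depend on the num crate at all, a similar function can be written with only std traits (a sketch; the `From<u8>` bound produces the constants 0 and 10 generically, which works for types like i16, i32, and i64, but notably not for i8):

```rust
use std::ops::{DivAssign, Neg};

fn int_length<T>(mut x: T) -> u8
where
    T: PartialOrd + DivAssign + Neg<Output = T> + From<u8>,
{
    if x == T::from(0) {
        return 1;
    }

    let mut length = 0u8;
    if x < T::from(0) {
        // Count the minus sign, then flip to positive.
        length += 1;
        x = -x;
    }

    while x > T::from(0) {
        // `DivAssign` lets us keep the original `/=` syntax.
        x /= T::from(10);
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));      // 2
    println!("{}", int_length(-45i64));  // 3
    println!("{}", int_length(0i16));    // 1
}
```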