Tags: c++11, templates, template-meta-programming, integer-overflow, template-argument-deduction

Deriving the return type based on the maximum possible range of the inputs in C++


I was recently asked this question in a C++ interview: improve the piece of code below, which fails when adding two ints yields a result that only fits in a long, so the return type needs to be derived accordingly.

The code below fails because the decltype()-based derivation is not intelligent enough to look at the actual range of the input values; it looks only at their type and derives the same type as the return type. Hence we perhaps need some template metaprogramming technique to derive the return type as long if T is int.

How can this be generalized? Any hints or clues?

I feel that decltype() won't be helpful here.

#include <iostream>
#include <climits>

using namespace std;

template <typename T>
auto adder(const T& i1, const T& i2) -> decltype(i1 + i2)
{
  return i1 + i2;
}

int main(int argc, char* argv[])
{
  cout << adder(INT_MAX-10, INT_MAX-3) << endl;       // wrong: the int sum overflows
  cout << adder<long>(INT_MAX-10, INT_MAX-3) << endl; // correct!
  return 0;
}
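
For reference, the deduced type really is plain int here: the usual arithmetic conversions never widen an int + int sum beyond int. A quick compile-time check (a sketch using <type_traits> and <utility>) makes this visible:

#include <type_traits>
#include <utility>

// int + int is typed int: the usual arithmetic conversions never widen
// beyond int here, so decltype(i1 + i2) cannot pick a larger return type.
static_assert(
    std::is_same<decltype(std::declval<int>() + std::declval<int>()), int>::value,
    "int + int yields int");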

Solution

  • Hence we perhaps need some template metaprogramming technique to derive the return type as long if T is int.

    Not so simple.

    If T is int, you can't be sure that long is enough.

    The standard only says that

    1) the number of bits for int (sizeof(int) * CHAR_BIT) is at least 16

    2) the number of bits for long (sizeof(long) * CHAR_BIT) is at least 32

    3) sizeof(int) <= sizeof(long)

    So a compiler that manages an int with sizeof(int) == sizeof(long) is perfectly legal, and

    adder<long>(INT_MAX-10, INT_MAX-3);
    

    doesn't work, because long may not be enough to hold (without overflow) the sum of two ints.
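
    You can make this concrete with a compile-time check. The following sketch (my addition, using std::numeric_limits) compiles only on platforms where long really has enough range for any sum of two ints:

    #include <limits>

    // Passes only where long can hold 2 * INT_MAX and 2 * INT_MIN;
    // the standard gives no such guarantee, so this may fail elsewhere.
    static_assert(std::numeric_limits<long>::max() / 2 >= std::numeric_limits<int>::max()
               && std::numeric_limits<long>::min() / 2 <= std::numeric_limits<int>::min(),
                  "long cannot hold every sum of two ints on this platform");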

    I don't see a simple and elegant solution.

    The best that comes to my mind is based on the fact that C++11 introduced the following types:

    1) std::int_least8_t, smallest integer type with at least 8 bits

    2) std::int_least16_t, smallest integer type with at least 16 bits

    3) std::int_least32_t, smallest integer type with at least 32 bits

    4) std::int_least64_t, smallest integer type with at least 64 bits

    C++11 also introduces std::intmax_t as the maximum-width integer type.

    So I propose the following template type selector

    template <std::size_t N, typename = std::true_type>
    struct typeFor;
    
    /* in case std::intmax_t is bigger than 64 bits */
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool,
       (N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
     { using type = std::intmax_t; };
    
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
     { using type = std::int_least64_t; };
    
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
     { using type = std::int_least32_t; };
    
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
     { using type = std::int_least16_t; };
    
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
     { using type = std::int_least8_t; };
    

    that, given a number of bits, defines the corresponding smallest "at least" integer type.
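
    A few spot checks (assuming the typeFor definitions above are in scope, plus <cstdint> and <type_traits>) show how the selector resolves:

    // Each request is rounded up to the smallest "at least" type covering it.
    static_assert(std::is_same<typeFor<8u>::type,  std::int_least8_t>::value,  "!");
    static_assert(std::is_same<typeFor<9u>::type,  std::int_least16_t>::value, "!");
    static_assert(std::is_same<typeFor<33u>::type, std::int_least64_t>::value, "!");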

    I also propose the following alias

    template <typename T>
    using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;
    

    that, given a type T, detects the smallest integer type that can surely hold the sum of two T values (an integer with at least one bit more than T).
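
    For instance, on a platform where int is 32 bits, typeNext<int> asks typeFor for at least 33 bits, so it resolves to std::int_least64_t:

    // Assumes a 32-bit int (common, but not guaranteed by the standard).
    static_assert(std::is_same<typeNext<int>, std::int_least64_t>::value,
                  "expects a platform with 32-bit int");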

    So your adder() simply becomes

    template<typename T>
    typeNext<T> adder (T const & i1, T const & i2)
     { return {typeNext<T>{i1} + i2}; }
    

    Observe that the returned value isn't simply

       return i1 + i2;
    

    otherwise you'd return the correct type but the wrong value: i1 + i2 is calculated as a T value, so it can overflow, and only afterwards is the sum converted to typeNext<T>.

    To avoid this problem, you initialize a typeNext<T> temporary with one of the two values (typeNext<T>{i1}), then add the other (typeNext<T>{i1} + i2), obtaining a typeNext<T> value, and finally return the computed value. This way the sum is calculated as a typeNext<T> sum and you don't get overflow.
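
    The difference is easy to see in a small sketch (using long long directly, to keep it self-contained; formally, overflowing the narrow int sum is undefined behaviour, while on typical two's-complement hardware it wraps around):

    #include <climits>
    #include <iostream>

    int main()
     {
       int a = INT_MAX - 10;
       int b = INT_MAX - 3;

       // Sum computed in int first, then widened: the overflow (formally
       // undefined behaviour) has already happened before the conversion.
       long long narrow = a + b;

       // One operand widened first: the addition itself is done in
       // long long and the exact value survives.
       long long wide = static_cast<long long>(a) + b;

       std::cout << narrow << " vs " << wide << '\n';
     }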

    The following is a full compiling example

    #include <cstdint>
    #include <climits>
    #include <iostream>
    #include <type_traits>
    
    template <std::size_t N, typename = std::true_type>
    struct typeFor;
    
    /* in case std::intmax_t is bigger than 64 bits */
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool,
       (N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
     { using type = std::intmax_t; };
    
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
     { using type = std::int_least64_t; };
    
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
     { using type = std::int_least32_t; };
    
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
     { using type = std::int_least16_t; };
    
    template <std::size_t N>
    struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
     { using type = std::int_least8_t; };
    
    template <typename T>
    using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;
    
    template<typename T>
    typeNext<T> adder (T const & i1, T const & i2)
     { return {typeNext<T>{i1} + i2}; }
    
    int main()
     {
       auto x = adder(INT_MAX-10, INT_MAX-3);
    
       std::cout << "int:  " << sizeof(int)*CHAR_BIT << std::endl;
       std::cout << "long: " << sizeof(long)*CHAR_BIT << std::endl;
       std::cout << "x:    " << sizeof(x)*CHAR_BIT << std::endl;
    
       std::cout << std::is_same<long, decltype(x)>::value << std::endl;
     }
    

    On my 64-bit Linux platform, I get 32 bits for int, 64 bits for long and for x, and long and decltype(x) are indeed the same type.

    But this is true for my platform; nothing guarantees that long and decltype(x) are always the same.

    Observe also that trying to get a type for the sum of two std::intmax_t's

     std::intmax_t  y {};
    
     auto z = adder(y, y);
    

    gives an error and doesn't compile, because no typeFor is defined for an N bigger than sizeof(std::intmax_t)*CHAR_BIT.
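
    If you'd rather have a readable diagnostic than an "incomplete type" error, one option (a sketch of mine, not part of the solution above) is to guard the lookup with a static_assert:

    // Hypothetical checked variant: fails with an explicit message when T
    // is already as wide as std::intmax_t, instead of an obscure
    // incomplete-type error. Assumes the typeFor definitions above.
    template <typename T>
    struct typeNextChecked
     {
       static_assert(sizeof(T) * CHAR_BIT < sizeof(std::intmax_t) * CHAR_BIT,
                     "no integer type wider than T is available");

       using type = typename typeFor<1u + sizeof(T) * CHAR_BIT>::type;
     };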