Tags: c++, templates, eigen, eigen3

Wrap Eigen's tensor class to create dynamic rank tensors?


Eigen's tensor module supports static-rank tensors through the use of template arguments, with something like Eigen::Tensor<typename T, int dims>. I'm only going to be using Eigen::Tensor<double, n>, where n isn't necessarily known at compile time. Is there any way to make a class like:

class TensorWrapper {
    Eigen::Tensor<double, ??????> t; // not sure what could go here
public:
    TensorWrapper(int dimensions) {
        t = Eigen::Tensor<double, dimensions>(); // wouldn't work either way
    }
};

I know that Eigen::Tensor<double, 2> and Eigen::Tensor<double, 3> are completely unrelated types because they are templated, and that template arguments can't be determined at run time, so the above would fail in every possible way. I also know that tensors in TensorFlow support this, but TensorFlow doesn't expose tensor contraction in C++ (which is why I need tensors in the first place). Is there any way to do what I wanted above, even for a limited number of dimensions (I won't need more than 5 or 6)? If not, are there any tensor libraries for C++ that support dynamic rank and tensor contraction?
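For a limited number of dimensions like this, one possible approach (a minimal sketch, not an established API; the class name, the rank cap of 4, and the rank() helper are all illustrative choices) is to hold the handful of concrete Eigen::Tensor instantiations in a std::variant and dispatch with std::visit. This assumes C++17 and Eigen's unsupported tensor module:

#include <unsupported/Eigen/CXX11/Tensor>
#include <stdexcept>
#include <type_traits>
#include <variant>

// Sketch: a runtime-rank wrapper over a fixed set of static-rank tensors.
// Supporting rank 5 or 6 just means adding alternatives to the variant.
class TensorWrapper {
    std::variant<Eigen::Tensor<double, 1>,
                 Eigen::Tensor<double, 2>,
                 Eigen::Tensor<double, 3>,
                 Eigen::Tensor<double, 4>> t;
public:
    explicit TensorWrapper(int rank) {
        switch (rank) { // pick the concrete instantiation at run time
            case 1: t = Eigen::Tensor<double, 1>(); break;
            case 2: t = Eigen::Tensor<double, 2>(); break;
            case 3: t = Eigen::Tensor<double, 3>(); break;
            case 4: t = Eigen::Tensor<double, 4>(); break;
            default: throw std::invalid_argument("unsupported rank");
        }
    }
    int rank() const {
        // std::visit generates one branch per alternative; NumIndices is
        // Eigen::Tensor's static rank constant.
        return std::visit([](const auto& tensor) {
            return static_cast<int>(std::decay_t<decltype(tensor)>::NumIndices);
        }, t);
    }
};

The trade-off of this pattern is that every supported rank gets instantiated at compile time, and every operation (contraction included) has to be written as a visitor that resolves the result rank inside the visit.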


Solution

  • Hi, I wrote this multidimensional tensor library (it's not fully fledged); it supports basic operations like dot products and pointwise operations.

    https://github.com/josephjaspers/BlackCat_Tensors

    Tensor<float> tensor3 = {3, 4, 5};   // generates a 3-dimensional tensor (3 rows, 4 columns, 5 pages)
    Tensor<float> tensor5 = {1,2,3,4,5}; // generates a 5-dimensional tensor (1 row, 2 columns, 3 pages, etc.)
    tensor3[1] = 3;                      // returns the second matrix and sets all its values to 3
    tensor3[1][2];                       // returns the second matrix, then the 3rd column of that matrix
    tensor3({1,2,3},{2,2});              // at index 1,2,3, returns a sub-matrix of dimensions 2x2
    

    All of the accessor operators, [] (int index) and ({initializer_list index}, {initializer_list shape}), return separate tensor objects, but they all refer to the same internal array. You can therefore modify the original tensor through these sub-tensors.
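    To illustrate that aliasing, here is a hypothetical snippet built only from the accessors shown above (exact behavior should be checked against the library itself):

    Tensor<float> tensor3 = {3, 4, 5}; // 3 rows, 4 columns, 5 pages
    tensor3[1] = 3;                    // fill the second matrix with 3s
    tensor3[1][2] = 7;                 // write 7s into its third column...
    // ...and tensor3 itself now holds those 7s: the sub-tensors are views
    // into tensor3's single backing array, not copies of it.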

    All the data is allocated in a single array. If you want to use the dot product, you need to link against BLAS. Here's the header file; it details most of the methods: https://github.com/josephjaspers/BlackCat_Tensors/blob/master/BC_Headers/Tensor.h

    However, it is not particularly fast; I use this library mainly for prototyping neural networks (https://github.com/josephjaspers/BlackCat_NeuralNetworks).