machine-learning, neural-network, point-clouds

Feature Vectors in Radial Basis Function Network


I am trying to use an RBF neural network (RBFNN) for point-cloud-to-surface reconstruction, but I can't work out what the feature vectors in the RBFNN should be.

Can anyone help me understand this?

The goal is to get to something like this: http://www.creatis.insa-lyon.fr/site/sites/default/files/resize/bunny5-200x200.jpg

From inputs like this: [image: the raw point cloud]


Solution

  • An RBF network essentially fits data with a linear combination of functions that obey a set of core properties -- chief among them radial symmetry. The parameters of each of these functions are learned by incremental adjustment, based on the errors generated by repeatedly presenting the inputs.
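
    To make that concrete (this notation and helper are my own sketch, not part of the original answer), the learned model is f(x) = sum_i w_i * phi(||x - c_i||), where the c_i are the centers and the w_i the weights:

    import numpy as np

    def rbf_network(x, centers, weights, sigma=1.5):
        # x: (N, D) query points, centers: (K, D), weights: (K,)
        # Pairwise Euclidean distances ||x - c_i||, shape (N, K)
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
        # Radially symmetric Gaussian bumps, combined linearly
        return np.exp(-d**2 / (2 * sigma**2)) @ weights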

    If I understand correctly (it's been a very long time since I used one of these networks), your question is about how to preprocess the data in the point cloud. Each point in the cloud should serve as one input. The features are its three spatial coordinates, so each point can already be considered a "feature vector." A sketch of what that looks like in code follows.
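
    For example, a minimal sketch under my own assumptions: the file name, the number of centers, and picking centers by subsampling the cloud are illustrative choices, not from the original answer.

    import numpy as np

    # Hypothetical input: an (N, 3) array of x, y, z coordinates;
    # each row is already one feature vector for the network.
    points = np.loadtxt("bunny_points.xyz")   # assumed file name

    # One common way to pick RBF centers is to subsample the cloud itself.
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), size=200, replace=False)]

    # Design matrix of RBF activations: G[n, k] = exp(-||p_n - c_k||^2 / (2*sigma^2))
    sigma = 0.05
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    G = np.exp(-d**2 / (2 * sigma**2))
    print(G.shape)   # (N, 200): one row of RBF activations per input point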

    You still have other choices to make, namely the number of radial-basis neurons in the hidden layer and which radial basis function to use (a Gaussian is a popular first choice); a few common options are listed just below. The training of the network and the surface reconstruction can be done in a number of ways, but I believe that is beyond the scope of the question.
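
    For reference (my own list, not from the original answer), some standard radial basis functions of the distance r = ||x - c||:

    import numpy as np

    def gaussian(r, eps=1.0):
        return np.exp(-(eps * r)**2)

    def multiquadric(r, eps=1.0):
        return np.sqrt(1.0 + (eps * r)**2)

    def thin_plate_spline(r):
        # r^2 * log(r), taking the limit 0 at r = 0
        return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-12)), 0.0)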

    I don't know if it will help, but here's a simple Python implementation of an RBF network performing function approximation with one-dimensional inputs:

    import numpy as np
    import matplotlib.pyplot as plt
    
    def fit_me(x):
        return (x-2) * (2*x+1) / (1+x**2)
    
    def rbf(x, mu, sigma=1.5):
        # Gaussian radial basis function centered at mu
        return np.exp(-(x - mu)**2 / (2 * sigma**2))
    
    # Core parameters including number of training
    # and testing points, minimum and maximum x values
    # for training and testing points, and the number
    # of rbf (hidden) nodes to use
    num_points = 100    # number of inputs (each 1D)
    num_rbfs = 20       # number of centers (np.linspace needs an integer count)
    x_min = -5
    x_max = 10
    
    # Training data, evenly spaced points
    x_train = np.linspace(x_min, x_max, num_points)
    y_train = fit_me(x_train)
    
    # Testing data, more evenly spaced points
    x_test  = np.linspace(x_min, x_max, num_points*3)
    y_test  = fit_me(x_test)
    
    # Centers of each of the rbf nodes
    centers = np.linspace(x_min, x_max, num_rbfs)
    
    # Everything is in place to train the network
    # and attempt to approximate the function 'fit_me'.
    
    # Start by creating a matrix G in which each row
    # corresponds to an x value within the domain and each 
    # column i contains the values of rbf_i(x).
    center_cols, x_rows = np.meshgrid(centers, x_train)
    G = rbf(x_rows, center_cols)    # G[n, k] = rbf(x_n, center_k)
    
    plt.plot(G)
    plt.title('Radial Basis Functions')
    plt.show()
    
    # Simple training in this case: use pseudoinverse to get weights
    weights = np.dot(np.linalg.pinv(G), y_train)
    
    # To test, create meshgrid for test points
    center_cols, x_rows = np.meshgrid(centers, x_test)
    G_test = rbf(x_rows, center_cols)
    
    # apply weights to G_test
    y_predict = np.dot(G_test, weights)
    
    plt.plot(y_predict)
    plt.title('Predicted function')
    plt.show()
    
    error = y_predict - y_test
    
    plt.plot(error)
    plt.title('Function approximation error')
    plt.show()
    

    From here, you can explore the way in which inputs are provided to the network and how the RBF nodes are used. This should extend to 2D inputs in a straightforward way, though training may get a bit more involved; a sketch of that extension follows below.
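
    As a rough illustration (my own extension, not part of the original answer), the same pseudoinverse recipe carries over to multi-dimensional inputs once the design matrix is built from Euclidean distances to the centers:

    import numpy as np

    def design_matrix(X, centers, sigma=1.5):
        # X: (N, D) inputs, centers: (K, D); entry [n, k] is the Gaussian
        # RBF response of center k to input x_n.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        return np.exp(-d**2 / (2 * sigma**2))

    # Toy 2D example: approximate z = sin(x) * cos(y)
    rng = np.random.default_rng(0)
    X_train = rng.uniform(-3, 3, size=(400, 2))
    z_train = np.sin(X_train[:, 0]) * np.cos(X_train[:, 1])

    centers = rng.uniform(-3, 3, size=(50, 2))
    G = design_matrix(X_train, centers)
    weights = np.linalg.pinv(G) @ z_train     # same training step as in 1D

    X_test = rng.uniform(-3, 3, size=(100, 2))
    z_pred = design_matrix(X_test, centers) @ weights
    print(np.abs(z_pred - np.sin(X_test[:, 0]) * np.cos(X_test[:, 1])).max())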

    To do proper surface reconstruction you'll likely need a representation of the surface that is altogether different from the representation of the function learned here. I'm not sure how to take this last step, but one common approach is sketched below.
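
    For completeness (this is my addition, a rough sketch of a widely used technique rather than anything from the original answer): fit an implicit function f(x, y, z) with RBFs so that f = 0 on the surface points and f = +/-eps at points offset along the (assumed available or estimated) normals, then extract the zero level set, e.g. with marching cubes. The file names below are placeholders.

    import numpy as np
    from scipy.interpolate import RBFInterpolator   # SciPy >= 1.7
    from skimage.measure import marching_cubes      # scikit-image

    points  = np.loadtxt("bunny_points.xyz")    # assumed (N, 3) surface points
    normals = np.loadtxt("bunny_normals.xyz")   # assumed (N, 3) unit normals

    # Constraints: f = 0 on the surface, f = +/-eps at the offset points.
    eps = 0.01
    X = np.vstack([points, points + eps * normals, points - eps * normals])
    y = np.hstack([np.zeros(len(points)),
                   +eps * np.ones(len(points)),
                   -eps * np.ones(len(points))])

    # Fit the implicit function with thin-plate-spline RBFs.
    f = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=1e-6)

    # Evaluate on a regular grid and extract the zero level set as a mesh.
    lo, hi = points.min(axis=0) - 0.05, points.max(axis=0) + 0.05
    axes = [np.linspace(l, h, 64) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    values = f(grid.reshape(-1, 3)).reshape(64, 64, 64)
    verts, faces, _, _ = marching_cubes(values, level=0.0)   # vertices in grid index coordinates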