
Topography height prediction from 2D image


I would like to train on 2D images together with the corresponding per-pixel height (topography) information. I have a set of 2D images taken from a topography where the height of each pixel is also known. Is there any way I can use deep learning to train on the images with the per-pixel height information?

I have already tried inferring some features from the images and pixel heights and relating them with regression methods such as SVM, but I have not yet obtained satisfactory results when predicting the pixel heights of new images.


Solution

  • How about using the pixel height values as labels and the images (RGB, I assume, so 3 channels) as the training set? Then you can just run supervised learning. That said, I am not sure how well height can be recovered by looking at a single image; even humans would have trouble doing that after seeing many images. I think you would need some kind of reference point.

    To convert an image into a 3D array of values (3rd dimension are the color channels):

    from keras.preprocessing import image
    
    # loads RGB image as PIL.Image.Image type
    img = image.load_img(img_file_path, target_size=(120, 120))
    # convert PIL.Image.Image type to 3D tensor with shape (120, 120, 3)
    x = image.img_to_array(img)
    

    There are a number of other ways too: Convert an image to 2D array in python
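    One such alternative is to use Pillow and NumPy directly. A minimal sketch (the image here is synthetic so the snippet runs as-is; with real data you would replace it with `Image.open(your_file_path)`):

    ```python
    import numpy as np
    from PIL import Image

    # Synthetic RGB image standing in for a real photograph
    # (replace with Image.open(your_file_path) for real data).
    img = Image.fromarray(
        np.random.randint(0, 255, (240, 240, 3), dtype=np.uint8)
    )

    # Resize, then convert to a float array of shape (height, width, channels)
    img = img.resize((120, 120))
    x = np.asarray(img, dtype="float32")
    print(x.shape)  # (120, 120, 3)
    ```

    Either route gives you the same `(120, 120, 3)` array that `img_to_array` produces, so the rest of the pipeline is unchanged.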

    In terms of assigning labels to images (here the labels are the pixel heights), it is as simple as creating your training set x_train with shape (nb_images, 120, 120, 3) and labels y_train with shape (nb_images, 120, 120, 1), then running supervised learning on these until, for each image in x_train, the model predicts each corresponding value in y_train within an acceptable error.
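
    To make the setup concrete, here is a minimal runnable sketch of that arrangement using synthetic data and a trivial per-pixel linear baseline in place of a deep network (the data, the linear height-from-color relationship, and all array names are assumptions for illustration; in practice you would swap the least-squares fit for a convolutional model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dataset: 10 RGB images, 120x120, with per-pixel heights.
    nb_images, h, w = 10, 120, 120
    x_train = rng.random((nb_images, h, w, 3)).astype("float32")

    # Toy assumption: height is a linear function of pixel color.
    true_coef = np.array([0.2, 0.5, 0.3], dtype="float32")
    y_train = (x_train @ true_coef)[..., np.newaxis]  # (nb_images, 120, 120, 1)

    # Treat every pixel as one training sample: (n_pixels, 3) -> (n_pixels,)
    X = x_train.reshape(-1, 3)
    y = y_train.reshape(-1)

    # Fit the per-pixel linear baseline with least squares.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Predict per-pixel heights for a new, unseen image.
    x_new = rng.random((h, w, 3)).astype("float32")
    y_pred = (x_new.reshape(-1, 3) @ coef).reshape(h, w)
    print(y_pred.shape)  # (120, 120)
    ```

    A real model would also exploit neighbouring pixels (spatial context) rather than each pixel's color alone, which is where a convolutional network earns its keep.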