I'm attempting to train a GAN on a 3D model of a chair with TensorFlow. The goal is for the GAN to have full context of the chair and thereafter be able to generate images containing the chair, based on the 3D model.
What I have been doing is reading the 3D model into Python, rotating it around its x, y, and z axes, and storing the rendered images to be used for training. The result is about 1.3 million images of the chair from every angle. I'm wondering if there is a better approach than generating millions of images for a single 3D model (one cheaper variant, random viewpoint sampling, is sketched after the code below).
It would be much more efficient to have the GAN learn the 3D model itself and then generate images of the learned chair in a realistic scene.
Python code I'm using for rotating the 3D model and saving the images:
import os

from stl import mesh
from mpl_toolkits import mplot3d
from matplotlib import pyplot

stl_mesh = mesh.Mesh.from_file('./chair.stl')
os.makedirs('./numpy-stl-images', exist_ok=True)  # make sure the output directory exists

def generate_save_figure(elev, azim, dist):
    figure = pyplot.figure(figsize=(1, 1))
    # add_subplot(projection='3d') replaces the deprecated Axes3D(figure) constructor
    axes = figure.add_subplot(projection='3d')
    axes.grid(False)
    axes.set_axis_off()  # public API instead of the private _axis3don flag
    axes.add_collection3d(mplot3d.art3d.Poly3DCollection(stl_mesh.vectors))
    # flatten() instead of flatten(-1); newer numpy rejects -1 as an order argument
    scale = stl_mesh.points.flatten()
    # equal limits on all three axes so the chair keeps the same apparent size per frame
    axes.auto_scale_xyz(scale, scale, scale)
    axes.view_init(elev=elev, azim=azim)
    axes.dist = dist  # camera distance (deprecated in matplotlib >= 3.6)
    figure.savefig('./numpy-stl-images/elev({})-azim({})-dist({}).png'.format(elev, azim, dist))
    print('saved elev {}, azim {}, dist {}'.format(elev, azim, dist))
    pyplot.close(figure)  # free the figure so memory doesn't grow across 1.3M renders

# 180 elevations x 360 azimuths x 20 distances = 1,296,000 images
for elev in range(0, 180):
    for azim in range(0, 360):
        for dist in range(5, 25):
            generate_save_figure(elev, azim, dist)
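As a cheaper variant of the exhaustive grid above, the viewpoints could be sampled at random instead of swept one degree at a time. This is only a sketch, reusing generate_save_figure from above; the 50,000 sample count is an arbitrary choice of mine, not something I've validated:

import random

# Random (elev, azim, dist) triples cover the same viewpoint space as the
# full grid, with far fewer renders. N_SAMPLES is a hypothetical dataset size.
N_SAMPLES = 50_000
for _ in range(N_SAMPLES):
    elev = random.randrange(0, 180)
    azim = random.randrange(0, 360)
    dist = random.randrange(5, 25)
    generate_save_figure(elev, azim, dist)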
Link to the GitHub repo I'm working on, for additional context on this question (note that the chair dataset is not yet available): https://github.com/RauxaDataScience/GansContextDataSets
Update: TensorFlow Graphics (https://www.tensorflow.org/graphics/) is the answer to training an ML model with 3D data.
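To illustrate the direction rather than the full pipeline, here is a minimal sketch of the idea: treat the mesh vertices as tensors and rotate them with rotation_matrix_3d.from_euler from TensorFlow Graphics, so viewpoints become a batched, differentiable transform instead of a million saved PNGs. The import path and from_euler are real; the batch size and the wiring around them are my assumptions, not tested code:

import numpy as np
import tensorflow as tf
from stl import mesh
from tensorflow_graphics.geometry.transformation import rotation_matrix_3d

stl_mesh = mesh.Mesh.from_file('./chair.stl')
# (n_triangles * 3, 3) array of vertex positions from the STL triangles
vertices = tf.constant(stl_mesh.vectors.reshape(-1, 3), dtype=tf.float32)

# Euler angles (radians) for a batch of 64 random viewpoints (arbitrary batch size)
angles = tf.random.uniform((64, 3), minval=0.0, maxval=2.0 * np.pi)
rotations = rotation_matrix_3d.from_euler(angles)        # shape (64, 3, 3)

# Rotate every vertex under every sampled viewpoint in one batched matmul
rotated = tf.einsum('bij,vj->bvi', rotations, vertices)  # shape (64, n_verts, 3)

Since the rotation is differentiable, gradients can flow from an image-space loss back through the viewpoint parameters, which seems like the property that makes this a better fit than pre-rendering a fixed dataset.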