I'm interested in developing an AR app using a Project Tango phone that can identify body parts, such as legs, and project 3D objects onto the identified parts. My concern is whether the sensors in Project Tango phones are suitable for this, and which features I should start with. I'd appreciate any guidance.
Thanks in advance
There are some research papers on human body segmentation / skeleton tracking that use an RGB image plus a depth map; some use only the RGB image.
I recommend this tutorial on Human Body Recognition and Tracking, which includes the following interesting reference:
J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, "Real-Time Human Pose Recognition in Parts from a Single Depth Image," Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2011.
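To give you an idea of what that paper does: it classifies every pixel of the depth map into a body part (using a random forest) based on very simple depth-comparison features. A rough sketch of such a feature is below (Java; all names are mine, not from any library):

```java
// Sketch of the depth-comparison feature from Shotton et al. 2011.
// `depth` is a row-major depth image in metres, (px, py) the pixel being
// classified, and (ux, uy), (vx, vy) the two learned offsets of the feature.
// The offsets are divided by the depth at the centre pixel so the feature
// is invariant to how far the person stands from the camera.
static float depthFeature(float[] depth, int width, int height,
                          int px, int py,
                          float ux, float uy, float vx, float vy) {
    float d = depthAt(depth, width, height, px, py);
    float d1 = depthAt(depth, width, height,
                       Math.round(px + ux / d), Math.round(py + uy / d));
    float d2 = depthAt(depth, width, height,
                       Math.round(px + vx / d), Math.round(py + vy / d));
    return d1 - d2;
}

// Out-of-bounds or background pixels get a large constant depth, as in the paper.
static float depthAt(float[] depth, int width, int height, int x, int y) {
    if (x < 0 || y < 0 || x >= width || y >= height) return 1e6f;
    float d = depth[y * width + x];
    return d > 0f ? d : 1e6f;
}
```

Many of these weak features, combined in a decision forest, are enough to label pixels as "leg", "arm", etc., which is exactly the kind of labelling you would need before attaching 3D objects to a limb.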
That work was done with Kinect-style sensors, which have a higher-resolution depth map than the Google Tango devices, so unfortunately I cannot say for certain that the results will be as good on a Google Tango device.
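If you want to start experimenting, the first feature to look at on Tango is simply getting the depth data off the device. Below is a minimal sketch using the Tango Java API; I'm writing the class and callback names from memory (older SDK versions deliver depth through onXyzIjAvailable / TangoXyzIjData instead of onPointCloudAvailable), so check them against your SDK version. Permission handling and disconnecting in onPause() are omitted.

```java
import java.util.ArrayList;

import android.app.Activity;

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoEvent;
import com.google.atap.tangoservice.TangoPointCloudData;
import com.google.atap.tangoservice.TangoPoseData;
import com.google.atap.tangoservice.TangoXyzIjData;

public class DepthActivity extends Activity {

    private Tango mTango;

    @Override
    protected void onResume() {
        super.onResume();
        // The Runnable is invoked once the Tango service is connected.
        mTango = new Tango(this, new Runnable() {
            @Override
            public void run() {
                // Enable the depth sensor and connect.
                TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
                config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
                mTango.connect(config);

                mTango.connectListener(new ArrayList<TangoCoordinateFramePair>(),
                        new Tango.OnTangoUpdateListener() {
                    @Override
                    public void onPointCloudAvailable(TangoPointCloudData pointCloud) {
                        // pointCloud.points holds numPoints * 4 floats
                        // (x, y, z, confidence) in the depth camera frame --
                        // this is the data you would feed into a body
                        // segmentation / skeleton tracking pipeline.
                    }

                    @Override public void onXyzIjAvailable(TangoXyzIjData xyzIj) {}
                    @Override public void onPoseAvailable(TangoPoseData pose) {}
                    @Override public void onFrameAvailable(int cameraId) {}
                    @Override public void onTangoEvent(TangoEvent event) {}
                });
            }
        });
    }
}
```

Once you have the point cloud coming in, you can project it into the colour camera frame and see whether its density on a person at typical distances is good enough for the segmentation approach in the paper above; that test will tell you quickly whether Tango's depth sensor is sufficient for your use case.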