Tags: augmented-reality, google-project-tango, indoor-positioning-system

How does Google implement visual positioning services?


VPS (Visual Positioning Service) impressed me a lot. I know about visual positioning methods based on AR markers, but it is hard to do visual positioning in an open environment (without known markers). I guess they may use the sensors in the smartphone to measure changes in world coordinates, which could feed into the calculation.

Does anyone know how Google does indoor positioning in an open environment? Thanks.


Solution

  • VPS is in closed beta right now, so finding out the specifics would probably be a breach of non-disclosure agreements.

    However, Google Tango's current development stack determines absolute position in Euclidean space through three core software/hardware technologies (a configuration sketch follows the list):

    1. Motion Tracking (achieved through a wide-angle "fisheye" monochrome camera in conjunction with the IMU: the gyroscopes, magnetometers, and accelerometers within the device).
    2. Depth Perception (achieved through a "time of flight" infrared emitter and receiver, which creates a dense point cloud of depth measurements).
    3. Area Learning (achieved through the RGB camera in conjunction with the fisheye and IR sensors, which 'maps' areas in the point cloud and 'remembers' their location).
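
    As a rough illustration of how those three capabilities are switched on, here is a minimal sketch using the public Tango Java client API. The class and key names (`Tango`, `TangoConfig`, `KEY_BOOLEAN_MOTIONTRACKING`, and so on) come from the SDK, but since VPS itself is closed beta, treat this as an approximation of the underlying stack rather than a description of how VPS actually works; exact names may vary between SDK releases.

    ```java
    import android.app.Activity;

    import com.google.atap.tangoservice.Tango;
    import com.google.atap.tangoservice.TangoConfig;

    public class TangoSetupActivity extends Activity {

        private Tango mTango;

        @Override
        protected void onResume() {
            super.onResume();
            // The Runnable fires once the Tango service is bound and ready.
            mTango = new Tango(TangoSetupActivity.this, new Runnable() {
                @Override
                public void run() {
                    TangoConfig config =
                            mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);

                    // 1. Motion Tracking: fisheye camera + IMU visual-inertial odometry.
                    config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);

                    // 2. Depth Perception: time-of-flight IR point clouds.
                    config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);

                    // 3. Area Learning: build and re-recognize a map of the space.
                    config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);

                    mTango.connect(config);
                }
            });
        }

        @Override
        protected void onPause() {
            super.onPause();
            mTango.disconnect();
        }
    }
    ```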

    The main point here is that Tango isn't just a software stack; it is hardware-dependent too. You can only develop Tango software on a Tango-enabled device such as the Lenovo Phab 2 Pro.
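
    To make that concrete: once connected, a Tango-enabled device streams the fused sensor output back through callbacks. Continuing the sketch above (this method would be called from inside the ready Runnable, after `connect()`), here is a hedged example of the update listener, again using the public Java API; note the exact callback set changed across SDK versions, e.g. `onXyzIjAvailable` was later superseded by a point-cloud callback.

    ```java
    import java.util.ArrayList;

    import com.google.atap.tangoservice.Tango;
    import com.google.atap.tangoservice.TangoCoordinateFramePair;
    import com.google.atap.tangoservice.TangoEvent;
    import com.google.atap.tangoservice.TangoPoseData;
    import com.google.atap.tangoservice.TangoXyzIjData;

    private void startListening() {
        // Ask for the device pose relative to where the service started.
        // With an area description loaded, using
        // COORDINATE_FRAME_AREA_DESCRIPTION as the base frame instead gives
        // a pose that is re-localized against the learned map.
        ArrayList<TangoCoordinateFramePair> framePairs =
                new ArrayList<TangoCoordinateFramePair>();
        framePairs.add(new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE));

        mTango.connectListener(framePairs, new Tango.OnTangoUpdateListener() {
            @Override
            public void onPoseAvailable(TangoPoseData pose) {
                // Motion tracking output: translation in metres plus a
                // rotation quaternion.
                double x = pose.translation[0];
                double y = pose.translation[1];
                double z = pose.translation[2];
            }

            @Override
            public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
                // Depth perception output: a dense point cloud from the
                // IR time-of-flight sensor.
            }

            @Override
            public void onFrameAvailable(int cameraId) {
                // RGB / fisheye camera frames, consumed by area learning.
            }

            @Override
            public void onTangoEvent(TangoEvent event) {
                // Service status events (e.g. tracking lost / regained).
            }
        });
    }
    ```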

    You could always sign up for the Tango VPS closed beta and find out more that way.