So I'm looking into building a robot that can respond to a command and go to a location (for example: "Go to the living room").
I'm currently working out the pathfinding aspect of it. Assume I already have a floor plan... (I can tackle that another time.)
What would be the approach to this? I already know how to program a simple A* pathfinding algorithm in Python that can get from point A to point B while avoiding obstacles, but I need my robot to be able to maneuver in a more advanced manner than a simple grid allows. I was thinking it might be possible to treat the robot as (for example) a 16x16 block within a bigger, say 100x100, grid, but I'm not sure if that's even possible or ideal. Or maybe I could allow fractional movement within a grid cell, for example moving 0.5 cells left on a 10x10 grid.
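One common way to get the "robot bigger than one cell" behavior without changing your A* at all is to plan on a fine grid and *inflate* the obstacles by the robot's radius, so the planner can keep treating the robot as a single point. Below is a minimal sketch of that idea; the function names (`inflate`, `astar`) and the 4-connected movement model are my own choices for illustration, not a standard API:

```python
import heapq

def inflate(grid, radius):
    """Return a copy of the grid where every cell within `radius` cells
    of an obstacle (1) is also marked blocked. Planning on this inflated
    grid lets a point-robot planner account for the robot's footprint."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out

def astar(grid, start, goal):
    """Plain 4-connected A* with a Manhattan-distance heuristic.
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded via a cheaper route
            continue
        came_from[cur] = parent
        if cur == goal:               # walk parents back to reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = cost + 1
                if ng < g.get(nxt, float('inf')):
                    g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None
```

So rather than "0.5 cells on a 10x10 grid", you would use a 100x100 grid where each cell is one tenth the size, inflate the walls by the robot's radius in cells, and run the same A* you already have.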
I am mainly just looking for a place to start programming the virtual aspect of this machine.
(No need to worry about actual positioning in the real world yet; I'm currently just looking to learn how to do it digitally.)
You never mentioned what sensors the robot will have, but the most sensible ones are 360° cameras (even ones that heavily distort the image should do) or LIDAR. In either case, what you're looking for is called SLAM (Simultaneous Localization and Mapping). There are already tons of examples using different libraries (ROS, OpenCV, etc.) that let you do this.
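To make the "Mapping" half concrete: an occupancy grid is typically built by casting each range reading out from the robot's pose, marking the cells the beam passed through as free and the cell it hit as occupied. The sketch below does only that, under the assumption that the pose is already known, which is exactly the part real SLAM has to estimate; the function name, cell encoding, and incremental ray-stepping are my own simplifications:

```python
import math

def update_occupancy(grid, pose, scan, cell_size=0.1):
    """Toy occupancy-grid update. Encoding: 0 = unknown, 1 = free,
    2 = occupied. pose is (x, y, heading) in metres/radians; scan is a
    list of (beam_angle, measured_range) pairs relative to the heading."""
    x, y, th = pose
    for ang, dist in scan:
        # Step along the beam in cell-sized increments, marking free space.
        for i in range(int(dist / cell_size)):
            px = x + math.cos(th + ang) * i * cell_size
            py = y + math.sin(th + ang) * i * cell_size
            r, c = int(py / cell_size), int(px / cell_size)
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
                grid[r][c] = 1
        # The beam's endpoint is where it hit an obstacle.
        px = x + math.cos(th + ang) * dist
        py = y + math.sin(th + ang) * dist
        r, c = int(py / cell_size), int(px / cell_size)
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
            grid[r][c] = 2
    return grid
```

This is not SLAM itself; real systems (gmapping, slam_toolbox in ROS, etc.) jointly estimate the pose and the map, but the grid produced this way is the kind of map your A* would then plan over.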
Edit: any hard-coded solution where you provide a grid or static map of the room will fail, because the real world is much more challenging than just writing a path-finding algorithm. You will need to adapt to the situation and prevent error build-up from the sensors (localization) while you adapt to dynamic objects like people, pets, etc. (mapping).
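The "adapt to dynamic objects" part, reduced to its simplest form, is just: follow the plan, keep sensing, and replan whenever the next step turns out to be blocked. The toy below uses a BFS planner as a stand-in for A* so it's self-contained, and a hypothetical `sense(pos)` callback standing in for real sensor input; production systems use incremental planners (e.g. D* Lite) and costmaps instead of replanning from scratch:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Minimal BFS shortest-path planner (stand-in for A*) on a
    4-connected grid where 0 = free and 1 = blocked."""
    q, came = deque([start]), {start: None}
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came):
                came[nxt] = cur
                q.append(nxt)
    return None

def follow_with_replanning(grid, start, goal, sense):
    """Walk the planned path one cell at a time. `sense(pos)` is a
    hypothetical hook that updates `grid` with newly observed obstacles;
    whenever the next step is now blocked, plan again from here."""
    pos, path = start, bfs_path(grid, start, goal)
    trace = [pos]
    while path and pos != goal:
        sense(pos)                        # simulated sensor update
        nxt = path[path.index(pos) + 1]
        if grid[nxt[0]][nxt[1]] == 1:     # next cell now blocked -> replan
            path = bfs_path(grid, pos, goal)
            if path is None:
                return None               # no route left
            nxt = path[1]
        pos = nxt
        trace.append(pos)
    return trace
```

Even this toy version shows why a static map isn't enough: the route that was valid when you started can stop being valid halfway through.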