I have an object with some sensors on it, each with a known 3D location in a fixed orientation relative to the others. Let's call this object the "detector". I have the detected locations of a few of these sensors in 3D world space. Problem: how do I get an estimated pose (position and rotation) of the "detector" in 3D world space?
I tried looking into the PnP problem, FLANN and ORB matching, and kNN for the outliers, but they all seem to expect a camera position of some sort. I have nothing to do with a camera, and all I want is the pose of the "detector". Considering that OpenCV is a "vision" library, do I even need OpenCV for this?
Edit: Not all sensors might be detected, indicated here by the light-green dots.
You definitely do not need OpenCV to estimate the pose of your object in space.
This is a simple optimization problem in which you minimize the distance between the detected points and the corresponding points of your model.
First, you need to create a model of your object as a function of its attitude (position and orientation) in space:

def detector(position, angles):
    # position = (x, y, z), angles = (alpha, beta, gamma)
    ...

which should return a list or array of the positions of all your points, with their IDs, in 3D space. You could even create a class for each of these sensor points, and a class for the whole object that has as many sensor attributes as there are sensors on your object. A minimal sketch of such a model follows below.
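For illustration, here is one way such a model could look, assuming a hypothetical three-sensor layout (LOCAL_SENSORS) and XYZ Euler angles applied with scipy's Rotation; adapt the layout and angle convention to your actual detector:

import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical sensor layout in the detector's own coordinate frame.
LOCAL_SENSORS = np.array([
    [0.0, 0.0, 0.0],   # sensor ID 1
    [1.0, 0.0, 0.0],   # sensor ID 2
    [0.0, 1.0, 0.0],   # sensor ID 3
])

def detector(position, angles):
    # Rotate the local layout by the Euler angles (in radians),
    # then translate it to the given position.
    rot = Rotation.from_euler('xyz', angles)
    return rot.apply(LOCAL_SENSORS) + np.asarray(position)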
Then you need to build an optimization algorithm that fits your model to the detected data, using the attitude x, y, z, alpha, beta, gamma as the variables.
For the objective function, you can use something like the sum of distances between corresponding IDs.

Let's say you have a 3-point object that you want to fit to 3 data points:
#Model
m1 = [x1, y1, z1]
m2 = [x2, y2, z2]
m3 = [x3, y3, z3]
#Data
p1 = [xp1, yp1, zp1]
p2 = [xp2, yp2, zp2]
p3 = [xp3, yp3, zp3]
import numpy as np

def distanceL2(pt1, pt2):
    # Euclidean (L2) distance between two 3D points
    return np.sqrt((pt1[0]-pt2[0])**2 + (pt1[1]-pt2[1])**2 + (pt1[2]-pt2[2])**2)

# You already know you want to relate the "1"s, "2"s and "3"s
obj_function = distanceL2(m1, p1) + distanceL2(m2, p2) + distanceL2(m3, p3)
Now you need to dig into optimization libraries to find the best algorithm to use, depending on how fast you need your optimization to be. Since your points in space are rigidly connected (they move as one body), this should not be too difficult; scipy.optimize can do it, as in the sketch below.
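For example, a minimal sketch with scipy.optimize.minimize, assuming the detector model sketched above and detected points already matched to model IDs (the detected values here are made up; filter out any undetected sensors beforehand):

import numpy as np
from scipy.optimize import minimize

# Hypothetical detected world-space positions for sensor IDs 1..3.
detected = np.array([
    [2.1, 0.9, 0.4],
    [3.0, 1.1, 0.5],
    [2.0, 1.9, 0.6],
])

def obj_function(params):
    # params = [x, y, z, alpha, beta, gamma]
    predicted = detector(params[:3], params[3:])
    # Sum of Euclidean distances between corresponding IDs.
    return np.sum(np.linalg.norm(predicted - detected, axis=1))

result = minimize(obj_function, x0=np.zeros(6), method='Nelder-Mead')
x, y, z, alpha, beta, gamma = result.x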
To reduce the dimensionality of your problem, try taking one of the detected points as a reference (as if that measurement could be trusted) and then find the minimum of the obj_function for that fixed position; only the 3 orientation parameters are left to optimize. Then iterate for each of the points you have. Once you have the optimum, you can look for a better position for this sensor in its neighbourhood and see whether the distance decreases further. A sketch of this reduction follows.
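Here is a minimal sketch of that reduction, reusing the hypothetical detector model and detected array from above: the pose is anchored so the first sensor coincides exactly with its detected point, leaving only the three Euler angles to optimize.

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def obj_orientation(angles):
    # Anchor the pose so sensor 1 lands exactly on its detected point;
    # only the three orientation parameters remain free.
    rot = Rotation.from_euler('xyz', angles)
    position = detected[0] - rot.apply(LOCAL_SENSORS[0])
    predicted = rot.apply(LOCAL_SENSORS) + position
    return np.sum(np.linalg.norm(predicted - detected, axis=1))

result = minimize(obj_orientation, x0=np.zeros(3), method='Nelder-Mead')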