I am trying to calculate the velocity of an object based on vectors of its X and Y coordinates. Originally, I computed the two component velocities and then used the Pythagorean theorem to combine them. `mdcx` and `mdcy` are vectors of the x and y coordinates, respectively.
```matlab
for i = 2:length(mdcx)
    xdif(i) = mdcx(i-1) - mdcx(i);
end
xvel = xdif/(1/60);

for i = 2:length(mdcy)
    ydif(i) = mdcy(i-1) - mdcy(i);
end
yvel = ydif/(1/60);

v = hypot(xvel, yvel);
```
A friend pointed out how clumsy this was, and I realized that there was a much nicer way of doing it:
```matlab
d = hypot(mdcx, mdcy);
for i = 2:length(d)
    v(i,1) = d(i) - d(i-1);
end
v = v/(1/60);
```
This is all well and good, except that the two methods give different answers and I cannot figure out why. An example of the results from method no. 1:
and the equivalent section from method no. 2:
My Question
What am I doing wrong here? Why aren't these coming up with the same results? It's probably a simple mistake, but I can't seem to figure out where it's coming from. Am I using `hypot` correctly?
Thanks in advance!
The correct method is the first. Velocity is a vector, so you have to compute its x and y components and then take the magnitude of that vector.
With the second method you are subtracting magnitudes of distances, and that's not correct. For example, in a circular movement around the origin of coordinates that would give you zero velocity, which is wrong.
To sum up: you are dealing with vectors. Do the vector subtraction, and only at the end take the magnitude. The magnitude of a difference is not the same as the difference of magnitudes.
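To see this concretely, here is a small sketch using made-up data (not your `mdcx`/`mdcy`): a point moving on a unit circle, sampled at 60 Hz. Method 1 recovers the (constant) speed, while method 2 returns zero everywhere because the distance from the origin never changes:

```matlab
% Hypothetical data: one revolution per second on a unit circle,
% sampled at 60 Hz.
t = (0:1/60:1)';
w = 2*pi;                 % angular speed, rad/s
x = cos(w*t);
y = sin(w*t);

% Method 1: subtract components, then take the magnitude.
% Approximates the true speed w ~ 6.28 at every sample.
v1 = hypot(diff(x), diff(y)) * 60;

% Method 2: take magnitudes, then subtract.
% d is constant (radius 1), so v2 is identically zero.
d  = hypot(x, y);
v2 = diff(d) * 60;
```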
By the way, you can vectorize the first method using `diff` (note this will not give an initial zero in the result, as your method does):
```matlab
v = hypot(diff(mdcx), diff(mdcy))*60;
```
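For example, with some made-up sample coordinates (motion along the x-axis only, so the speed is easy to check by hand):

```matlab
% Hypothetical data: the point moves 1, 2, then 3 units per frame
% along x, at 60 frames per second.
mdcx = [0; 1; 3; 6];
mdcy = [0; 0; 0; 0];

v = hypot(diff(mdcx), diff(mdcy)) * 60;   % [60; 120; 180]
```

This is element-for-element the same as your first loop (up to sign, which `hypot` discards anyway), just without the leading zero.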