I'm creating a collision detection system in C++ using the glm library.
I have an array of vertices defined as std::vector<glm::vec3> vertices
and member functions to compute the maximum x, y, and z, e.g.:
GLfloat AssetInstance::maxX()
{
    GLfloat max = vertices.at(0).x;
    for (glm::vec3 vertex : vertices)
    {
        if (vertex.x > max) max = vertex.x;
    }
    return max;
}
but if I run the following code:
std::vector<glm::vec3> testVector;
testVector.push_back(glm::vec3(3.0500346, 1.0, 1.0));
testVector.push_back(glm::vec3(3.0500344, 2.0, 2.0));
testVector.push_back(glm::vec3(3.0500343, 3.0, 3.0));
std::cout << maxX(testVector) << std::endl;
the output is 3.05003
I thought that glm::vec3 stored doubles, and that a double was more precise than that?
Is there a better way to do this? My maxX isn't returning precise enough results.
Try std::setprecision (from <iomanip>). By default the stream precision is 6, which is what you're getting:
#include <iostream>
#include <iomanip>

int main()
{
    std::cout << "default precision (6): " << 3.0500346 << '\n';
    std::cout << "std::setprecision(10): " << std::setprecision(10) << 3.0500346 << '\n';
    return 0;
}

Output (Coliru link):

clang++ -std=c++14 -O2 -Wall -pedantic -pthread main.cpp && ./a.out

default precision (6): 3.05003
std::setprecision(10): 3.0500346
On another note, you're returning a GLfloat, which is a float, not a double, so no matter what glm uses internally, you're converting the result to float. Ultimately, calling maxX gives you float precision, not double precision.
P.S.: Looking at the docs, glm::vec3 uses float components; the double-precision counterpart is glm::dvec3, so vec3 does not give you doubles by default.