I am working on an expression evaluator as an internal component of a work-related project, but I am seeing some odd behavior in the output of floating-point math.
The evaluator takes in a string:
e.evaluate("99989.3 + 2346.4");
//should be 102335.7
//result is 102336
// This function returns the result as a string
template <class TYPE> std::string Str( const TYPE & t ) {
    // at this point t is equal to 102335.7
    std::ostringstream os;
    os << t;
    // at this point os.str() == "102336"
    return os.str();
}
It appears as if any floating-point number above about 1e+05 is being rounded to the nearest whole number. Can anyone explain why this is happening and how I might overcome it?
By default, stream output formats floating-point values with 6 significant digits, so 102335.7 gets rounded to 102336. You can raise the precision with std::setprecision, with a bit of help from std::fixed if you want a fixed number of digits after the decimal point.