I am using Johnson's algorithm to detect negative cycles in a given graph. The problem I am having is that, when running Bellman-Ford from the virtual node, every weight update is a floating-point operation, which generally introduces some precision error. The graphs I am dealing with average 250k vertices and 500k edges, which means a lot of floating-point arithmetic and accumulated precision errors.
After I reweight the edges using the Bellman-Ford results, there are a lot of errors on the order of -0.000001, which I am able to detect and fix. But there were even some weights of -1 in the resulting graph, accumulated from many minor errors. Can I fix this issue without greatly degrading performance?
Well, the absolute easiest method is to use fixed-point (integer) weights, since integer arithmetic is exact. If you can scale the weights so that the largest-magnitude weight fills most of the 63 available bits (leaving enough headroom that no path sum can overflow) without adding unacceptable error to the smallest-magnitude weight, then do this.
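A minimal sketch of that conversion, assuming 64-bit signed weights; the function name and the headroom heuristic are illustrative, not from any particular library:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Scale double weights into int64_t fixed point. Headroom is chosen so that
// a sum over |V| - 1 edges (the longest simple path) cannot overflow.
std::vector<int64_t> to_fixed_point(const std::vector<double>& w, std::size_t n_vertices) {
    double max_abs = 0.0;
    for (double x : w) max_abs = std::max(max_abs, std::abs(x));
    if (max_abs == 0.0) return std::vector<int64_t>(w.size(), 0);
    // Reserve ceil(log2(|V|)) + 1 bits above each weight for path sums.
    int headroom = static_cast<int>(std::ceil(std::log2(double(n_vertices)))) + 1;
    double scale = std::ldexp(1.0, 62 - headroom) / max_abs;  // 2^(62 - headroom) / max_abs
    std::vector<int64_t> out;
    out.reserve(w.size());
    for (double x : w) out.push_back(static_cast<int64_t>(std::llround(x * scale)));
    return out;
}
```

Run Bellman-Ford entirely on these integer weights; the only rounding happens once, at conversion time, so no error can accumulate across relaxations.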
Otherwise, using pairwise summation or Kahan (compensated) summation to keep the error small may suffice. In extreme cases you could use multiprecision floating-point techniques to eliminate it entirely, but my intuition is that it would take a weird, huge graph (astronomically larger than yours) for that to become necessary.
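For the Kahan route, here is a sketch of compensated summation folded into the Bellman-Ford relaxation itself, carrying a per-vertex compensation term alongside each distance; all names here are illustrative:

```cpp
#include <limits>
#include <vector>

struct Edge { int u, v; double w; };

// Bellman-Ford from src with per-vertex Kahan compensation: dist[v] is the
// running path sum, comp[v] the rounding error incurred while building it.
// Returns true if a negative cycle is reachable from src.
bool bellman_ford_kahan(int n, const std::vector<Edge>& edges, int src,
                        std::vector<double>& dist) {
    const double INF = std::numeric_limits<double>::infinity();
    dist.assign(n, INF);
    std::vector<double> comp(n, 0.0);
    dist[src] = 0.0;
    for (int pass = 0; pass < n - 1; ++pass) {
        bool changed = false;
        for (const Edge& e : edges) {
            if (dist[e.u] == INF) continue;
            // Kahan step: add e.w to the path sum at u, carrying u's compensation.
            double y = e.w - comp[e.u];
            double t = dist[e.u] + y;
            if (t < dist[e.v]) {
                comp[e.v] = (t - dist[e.u]) - y;  // rounding error of this addition
                dist[e.v] = t;
                changed = true;
            }
        }
        if (!changed) break;  // converged early
    }
    // One extra pass: any further improvement means a negative cycle.
    for (const Edge& e : edges) {
        if (dist[e.u] != INF && dist[e.u] + (e.w - comp[e.u]) < dist[e.v]) return true;
    }
    return false;
}
```

This keeps each distance accurate to roughly one rounding error rather than one per relaxation, at the cost of one extra double per vertex and a few extra operations per edge.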