Tags: linear-algebra, numerical-methods, wolframalpha, pari-gp

How can I improve the lindep function's applicability in Pari/GP for integral approximations?


While doing certain computations involving the Rogers L-function, the following result was generated by Wolfram Alpha:

[Wolfram Alpha output: a closed form for the integral in terms of π² and ζ(3); image not reproduced]

I wanted to verify this result in Pari/GP by means of the lindep function, so I calculated the integral to 20 digits in WA, yielding:

11.3879638800312828875

Then, I used the following code in Pari/GP:

lindep([zeta(2), zeta(3), 11.3879638800312828875])

As pi^2 = 6*zeta(2), one would expect the output to be a vector along the lines of:

[12,12,-3]

because that's the linear dependency suggested by WA's result. However, Pari/GP returned a very different vector with large entries:

[35237276454, -996904369, -4984618961]

I think the first vector should be the "right" output of the Pari code sample.

Questions:

  1. Why is the lindep function in Pari/GP not yielding the output one would expect in this case?
  2. What can I do to make it give the vector that would be more appropriate in this situation?

Solution

    1. It comes down to Pari treating the value you pass in as exact. The number 11.3879638800312828875 is only a 20-digit rounding of the true integral, so at the default working precision the expected relation no longer holds exactly, and lindep instead finds an integer relation that fits the rounded value better.

    2. You can change the accuracy lindep works with by passing a second argument. The manual (quoted below) states that this should be smaller than the number of correct decimal digits in the input, so with a 20-digit value something in the range 15-18 is a sensible choice; see the sketch right after this list. That should solve the issue.
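
    A minimal sketch of this fix (a hypothetical GP session; the exact output may differ by sign or a common factor):

    v = [zeta(2), zeta(3), 11.3879638800312828875];
    \\ tell lindep the last entry is only accurate to about 18 decimal digits
    lindep(v, 18)
    \\ expected: a small vector proportional to [12, 12, -3], e.g. [4, 4, -1]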

    lindep(v, {flag = 0}) finds a small nontrivial integral linear combination between components of v. If none can be found return an empty vector.

    If v is a vector with real/complex entries we use a floating point (variable precision) LLL algorithm. If flag = 0 the accuracy is chosen internally using a crude heuristic. If flag > 0 the computation is done with an accuracy of flag decimal digits. To get meaningful results in the latter case, the parameter flag should be smaller than the number of correct decimal digits in the input.
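
    As a quick, purely illustrative sanity check of that rule, one can use the known relation Pi^2 = 6*zeta(2) with a deliberately rounded value (the outputs are indicative, not guaranteed):

    x = 1.6449340668482264;    \\ zeta(2) rounded: only ~17 digits are correct
    lindep([Pi^2, x])          \\ default heuristic may return a large spurious vector
    lindep([Pi^2, x], 15)      \\ 15 < 17 correct digits: should recover [1, -6] up to sign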