Tags: python, numpy, scipy, sympy, mathcad

Solving Ax = b with constraints for a non-square matrix A, using Python


I solved this problem in Mathcad, but I don't know how to transfer it to Python.

Mathcad setup: [screenshot]

Solution in Mathcad: [screenshot]

I tried this code:

import numpy as np
from sympy import symbols, nonlinsolve

Q = np.array([230.8084, 119.1916, 76.943, 153.8654, 196.1346])
x1, x2, x3, x4, x5 = symbols('x1, x2, x3, x4, x5', real=True)
eq1 = (Q[0]**2)*x1 - (Q[1]**2)*x2 + (Q[2]**2)*x3
eq2 = (Q[0]**2)*x1 + (Q[3]**2)*x4 - (Q[4]**2)*x5 - (Q[2]**2)*x3
eq3 = (Q[2]**2)*x3 - (Q[3]**2)*x4 + (Q[4]**2)*x5
q = nonlinsolve([eq1, eq2, eq3], [x1, x2, x3, x4, x5])
print(q)

Result: {(0, 1.66644368166375x4 - 2.70780339743064x5, 3.99892914904867x4 - 6.49785771642014x5, x4, x5)}

It's fine if x4 and x5 remain free parameters, but the value of each x should be > 0 and < 1. I do not know how to impose these constraints with e.g. numpy/scipy.
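
For illustration, the parametric solution can at least be probed numerically by substituting values for the free symbols (a sketch continuing from the snippet above; the trial values 0.5 and 0.2 are arbitrary, not part of the original question):

# Continuing from the sympy snippet above: q, x4, x5 are assumed to be defined there
sol = next(iter(q))                      # the single parametric tuple returned by nonlinsolve
trial = {x4: 0.5, x5: 0.2}               # arbitrary trial values for the free symbols
values = [float(expr.subs(trial)) for expr in sol]
print(values)
print(all(0 < v < 1 for v in values))    # False here, since x1 is identically 0 in this family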


Solution

  • Here is a solution with scipy.optimize.minimize() that follows this answer to a related question and that tries to replicate your Mathcad solution:

    import numpy as np
    from scipy.optimize import minimize, Bounds
    
    # We only ever need the squared values of Q, so we square them directly
    q_sq = np.array([230.8084, 119.1916, 76.943, 153.8654, 196.1346]) ** 2
    
    # Define the function to be optimized
    def f(x):
    
        # Set up the system of equations ...
        rows = np.array([[q_sq[0], -q_sq[1], q_sq[2],        0,        0],
                         [q_sq[0], -q_sq[1],       0,  q_sq[3], -q_sq[4]],
                         [      0,        0, q_sq[2], -q_sq[3],  q_sq[4]]])
        y = rows @ x
        
        # ... the results of which should all be zero
        return y @ y
    
    lower_bds = 0.001 * np.ones(5)  # Constrain all x >= 0.001
    upper_bds = 0.999 * np.ones(5)  # Constrain all x <= 0.999
    initial_guess = lower_bds + 1e-6  # Initial guess slightly above lower bounds
    
    res = minimize(f, x0=initial_guess, bounds=Bounds(lb=lower_bds, ub=upper_bds))
    
    print(f"Converged? {res.success}")
    print(f"Result: {res.x}")
    # >>> Converged? True
    # >>> Result: [0.001      0.00416655 0.001      0.00193571 0.00103738]
    

    The result is pretty close to what you found in Mathcad, although the way to get there was admittedly a bit more tedious here. Note: just like your Mathcad solution but unlike your sympy solution, the approach with scipy.optimize.minimize() picks a single solution from the solution space.
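
  • As an alternative sketch (not part of the original answer, but possibly simpler): since the system is linear and the constraints are plain box bounds, scipy.optimize.lsq_linear() solves the bounded linear least-squares problem min ||A @ x - b|| directly, with b = 0 here:

    import numpy as np
    from scipy.optimize import lsq_linear
    
    q_sq = np.array([230.8084, 119.1916, 76.943, 153.8654, 196.1346]) ** 2
    
    # Same coefficient matrix as in the minimize() approach above
    A = np.array([[q_sq[0], -q_sq[1], q_sq[2],        0,        0],
                  [q_sq[0], -q_sq[1],       0,  q_sq[3], -q_sq[4]],
                  [      0,        0, q_sq[2], -q_sq[3],  q_sq[4]]])
    b = np.zeros(3)  # right-hand side: every equation should equal zero
    
    # Box bounds 0.001 <= x <= 0.999, as in the minimize() approach
    res = lsq_linear(A, b, bounds=(0.001, 0.999))
    print(res.success, res.x)

    This also picks a single bounded solution from the solution space; whether it lands on exactly the same point as minimize() depends on the solver, so treat it as a starting point rather than a drop-in replacement.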