I need to write a function that uses scipy.optimize.linprog to find the mixed-strategy Nash equilibrium of a 3x3 payoff matrix.
The problem is defined as:
1- x_i is the probability of selecting row i.
2- The payoff of a column is the sum of the products of each x_i with the corresponding entry under col_j. (For example, the payoff of col_1 = 0 * x_1 - 1 * x_2 + 1 * x_3.)
This is a linear programming problem, defined as below:
#The matrix to solve
        col_1  col_2  col_3
x_1 [[   0.0,   1.0,  -1.0],
x_2  [  -1.0,   0.0,   1.0],
x_3  [   1.0,  -1.0,   0.0]]
maximize the row player's payoff:
(0 + 1 - 1)x_1 + (-1 + 0 + 1)x_2 + (1 - 1 + 0)x_3
subject to:
0 - x_2 + x_3 = x_1 + 0 - x_3  --->  -x_1 - x_2 + 2x_3 = 0  #Payoff col_1 = Payoff col_2
0 - x_2 + x_3 = -x_1 + x_2 + 0  --->  x_1 - 2x_2 + x_3 = 0  #Payoff col_1 = Payoff col_3
0<=x_1<=1 #Probability bounds of x_1
0<=x_2<=1 #Probability bounds of x_2
0<=x_3<=1 #Probability bounds of x_3
x_1+x_2+x_3=1 #Sum of probabilities of all rows
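As a quick sanity check (an editorial aside; it assumes NumPy is available), the uniform strategy [1/3, 1/3, 1/3] satisfies every constraint above: it sums to 1 and makes all three column payoffs equal:

```python
import numpy as np

# Payoff matrix from the question
X = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])

x = np.array([1/3, 1/3, 1/3])  # candidate mixed strategy

col_payoffs = x @ X  # payoff of each column under strategy x
print(col_payoffs)   # all three entries are equal (0 here), so the
                     # "Payoff col_1 = Payoff col_j" constraints hold
print(x.sum())       # 1.0 -> probabilities sum to 1
```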
The solution should be: [0.33333, 0.33333, 0.33333]
but when I run my code I get the error below:
message: 'Optimization failed. Unable to find a feasible starting point.'
nit: 2
status: 2
success: False
x: nan
Below is my function; I don't know why it fails:
def solve_Mixed_NE_LP(X):
    num_of_rows = X.shape[0]
    num_of_columns = X.shape[1]
    c = np.sum(X, axis=1).T  #objective to maximize
    b_eq = np.array([])
    A_eq = None
    bounds = []
    #Probabilities bounds: 0 <= x_i <= 1
    for i in range(num_of_rows):
        bounds.append((0., 1.))
    #Total rows selection probabilities must sum to 1
    b_eq = np.append(b_eq, np.array([1]))
    A_eq = np.array([[1 for i in range(num_of_rows)]]).T
    XT = X.T
    for i in range(1, num_of_columns):
        b_eq = np.append(b_eq, np.array([0]))
        constraint = XT[0, :] - XT[i, :]
        constraint = np.array([constraint]).T
        A_eq = np.hstack((A_eq, constraint))
    return optimize.linprog(c=c, A_ub=None, b_ub=None, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method='simplex')
Your matrix A_eq and the vector b_eq are wrong. According to your optimization problem, they should be:
In [21]: A_eq
Out[21]:
array([[-1, -1, 2],
[ 1, -2, 1],
[ 1, 1, 1]])
In [22]: b_eq
Out[22]: array([0., 0., 1.])
instead of
In [25]: A_eq
Out[25]:
array([[ 1., -1., 1.],
[ 1., -1., -2.],
[ 1., 2., 1.]])
In [26]: b_eq
Out[26]: array([1., 0., 0.])
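The root cause is that the original loop stacks each constraint as a column (np.hstack on column vectors), so the resulting matrix is the transpose of the intended one, with the probability-sum constraint landing in the first column instead of the last row. A minimal sketch of the difference (the variable names here are mine, not from the question):

```python
import numpy as np

X = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])
XT = X.T

# Intended: each equality constraint is a ROW of A_eq
A_eq_rows = np.vstack([XT[0] - XT[1],   # payoff col_1 = payoff col_2
                       XT[0] - XT[2],   # payoff col_1 = payoff col_3
                       np.ones(3)])     # probabilities sum to 1
print(A_eq_rows)
# [[-1. -1.  2.]
#  [ 1. -2.  1.]
#  [ 1.  1.  1.]]

# Buggy: the same vectors stacked as COLUMNS, as the question's loop does
A_eq_cols = np.hstack([np.ones((3, 1)),
                       (XT[0] - XT[1]).reshape(-1, 1),
                       (XT[0] - XT[2]).reshape(-1, 1)])
print(A_eq_cols)
# [[ 1. -1.  1.]
#  [ 1. -1. -2.]
#  [ 1.  2.  1.]]
```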
Changing your function to
def solve_Mixed_NE_LP(X):
    num_of_rows = X.shape[0]
    num_of_columns = X.shape[1]
    c = np.sum(X, axis=1).T  #objective coefficients (all zero here; note linprog minimizes, so in general pass -c to maximize)
    #Probabilities bounds: 0 <= x_i <= 1
    bounds = [(0, 1) for i in range(num_of_rows)]
    A_eq = np.zeros(X.shape)
    b_eq = np.zeros(num_of_rows)
    #Total row selection probabilities must sum to 1
    A_eq[-1, :] = np.ones(num_of_columns)
    b_eq[-1] = 1
    #Equal column payoffs: payoff col_1 = payoff col_(i+2)
    for i in range(num_of_rows - 1):
        A_eq[i, :] = X[:, 0] - X[:, i + 1]
    return optimize.linprog(c=c, A_ub=None, b_ub=None, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method='simplex')
gives me:
con: array([0., 0., 0.])
fun: 0.0
message: 'Optimization terminated successfully.'
nit: 6
slack: array([], dtype=float64)
status: 0
success: True
x: array([0.33333333, 0.33333333, 0.33333333])
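For completeness, here is a self-contained version of the corrected function (renamed to follow PEP 8; this is a sketch, not the answer's exact code). Note that method='simplex' has been deprecated and later removed in recent SciPy releases, so this version uses method='highs', which returns the same equilibrium:

```python
import numpy as np
from scipy import optimize

def solve_mixed_ne_lp(X):
    # Same constraint construction as above, with the modern 'highs' solver
    num_rows, num_cols = X.shape
    c = np.sum(X, axis=1)               # all zeros for this game
    bounds = [(0, 1)] * num_rows        # 0 <= x_i <= 1
    A_eq = np.zeros(X.shape)
    b_eq = np.zeros(num_rows)
    A_eq[-1, :] = np.ones(num_cols)     # probabilities sum to 1
    b_eq[-1] = 1
    for i in range(num_rows - 1):
        A_eq[i, :] = X[:, 0] - X[:, i + 1]  # equal column payoffs
    return optimize.linprog(c=c, A_eq=A_eq, b_eq=b_eq,
                            bounds=bounds, method='highs')

X = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])
res = solve_mixed_ne_lp(X)
print(res.x)   # ~[0.33333333, 0.33333333, 0.33333333]
```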