I am trying to implement a simple model estimation in Python. I have an ARCH model:
logR_t = u + theta_1 * logR_{t-1} + \epsilon_t
where logR_t is my vector of log-returns, u and theta_1 are the two parameters to be estimated, and \epsilon_t are my residuals.
In Matlab, I have the following lines to call the optimiser on the function Error_ARCH. The initial guess for both parameters is 1, their lower bounds are -10 and their upper bounds are 10.
ARCH.param = lsqnonlin( @(param) Error_ARCH(param, logR), [1 1], [-10 -10], [10 10]);
[ARCH.Error, ARCH.Residuals] = Error_ARCH( ARCH.param, logR);
Where the error to minimise is given as:
function [error, residuals] = Error_ARCH(param, logreturns)
    % Initialisation
    y_hat = zeros(length(logreturns), 1);
    % Parameters
    u = param(1);
    theta1 = param(2);
    % Define model: one-step-ahead prediction from the lagged return
    ARCH = @(z) u + theta1.*z;
    for i = 2:length(logreturns)
        y_hat(i) = ARCH(logreturns(i-1));
    end
    error = abs(logreturns - y_hat);
    residuals = logreturns - y_hat;
end
I would like to do something similar in Python, but I am stuck because I do not know where to specify the extra arguments to the least_squares function in SciPy. So far I have:
from scipy.optimize import least_squares
def model(param, z):
    """One-step-ahead prediction of the model we are estimating."""
    u = param[0]
    theta1 = param[1]
    return u + theta1*z

def residuals_ARCH(param, z):
    return z - model(param, z)
When I call the least_squares optimiser, I get an error: residuals_ARCH() missing 1 required positional argument: 'z'
guess = [1, 1]
result = least_squares(residuals_ARCH, x0=guess, verbose=1, bounds=(-10, 10))
Thank you for all your help
The least_squares method expects a function with signature fun(x, *args, **kwargs). Hence, you can use a lambda expression similar to your Matlab function handle:
# logR = your log-returns vector
result = least_squares(lambda param: residuals_ARCH(param, logR), x0=guess, verbose=1, bounds=(-10, 10))
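Alternatively, least_squares takes an args tuple that is forwarded to the residual function, which avoids the lambda. One caveat: residuals_ARCH above predicts z from z itself, while the Matlab Error_ARCH predicts each return from the previous one. Below is a minimal sketch of the lagged version, assuming logR is a 1-D NumPy array of log-returns (the name residuals_ARCH_lagged is just illustrative):
import numpy as np
from scipy.optimize import least_squares

def residuals_ARCH_lagged(param, z):
    """Residuals of y_t = u + theta1 * y_{t-1}; the first fitted
    value stays zero, as in the Matlab Error_ARCH."""
    u, theta1 = param
    y_hat = np.zeros_like(z)
    y_hat[1:] = u + theta1 * z[:-1]  # predict each return from its predecessor
    return z - y_hat

# logR = your log-returns vector (1-D NumPy array)
guess = [1, 1]
result = least_squares(residuals_ARCH_lagged, x0=guess, args=(logR,),
                       verbose=1, bounds=(-10, 10))
u_hat, theta1_hat = result.x  # estimated parameters
Note that the scalar bounds=(-10, 10) are broadcast to both parameters, matching the Matlab [-10 -10] and [10 10] vectors.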