I had my script running smoothly from the command line; however, when I start it as a systemd service, I get the following error:
iot_local.service - My iot_local Service
Loaded: loaded (/lib/systemd/system/iot_local.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-04-01 23:06:45 UTC; 5s ago
Process: 2436 ExecStart=/usr/bin/python /home/ubuntu/myTemp/iot_local.py (code=exited, status=1/FAILURE)
Main PID: 2436 (code=exited, status=1/FAILURE)
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 358, in resolve
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: service_name, region_name)
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: File "/usr/local/lib/python2.7/dist-packages/botocore/regions.py", line 122, in construct_endpoint
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: partition, service_name, region_name)
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: File "/usr/local/lib/python2.7/dist-packages/botocore/regions.py", line 135, in _endpoint_for_partition
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: raise NoRegionError()
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: botocore.exceptions.NoRegionError: You must specify a region.
Apr 01 23:06:45 ip-172-31-29-45 systemd[1]: iot_local.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 23:06:45 ip-172-31-29-45 systemd[1]: iot_local.service: Unit entered failed state.
Apr 01 23:06:45 ip-172-31-29-45 systemd[1]: iot_local.service: Failed with result 'exit-code'.
It seems to fail on this line:
DB = boto3.resource('dynamodb')
If I add the region as an argument, the script still fails later because it cannot find the credentials. When I provide the region, an access key ID, and a secret key as arguments, everything works:
boto3.resource('dynamodb', region_name='us-west-2', aws_access_key_id=ACCESS_ID, aws_secret_access_key=ACCESS_KEY)
The obvious problem is that when this script is run as a service, it fails to obtain the info from ~/.aws/config and ~/.aws/credentials, which I made sure contain all the necessary information by running aws configure as mentioned here.
[default]
aws_access_key_id=XXXXXXXXXXXXXX
aws_secret_access_key=YYYYYYYYYYYYYYYYYYYYYYYYYYY
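For the no-argument boto3.resource('dynamodb') call to work, the region also has to be resolvable; aws configure writes it to the separate ~/.aws/config file. A typical config file would look like this (us-west-2 shown here only because that is the region that worked when passed explicitly):

```ini
[default]
region = us-west-2
```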
I also tried this:
export AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
and this:
sudo chown root:root ~/.aws
but it did not help. Any ideas why the .service does not "see" the credentials files?
When systemd runs your script as a service, the script is no longer being run by the ubuntu user, so the home directory is no longer /home/ubuntu. That means ~/.aws/credentials no longer refers to /home/ubuntu/.aws/credentials, and your script is therefore trying to load credentials from the wrong place (probably /root/.aws/credentials).
You can configure systemd to run your script as a specific user: add User=ubuntu to the [Service] section.
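A minimal unit file sketch with that change applied (the ExecStart path is taken from your error output; the Description and the other directives are illustrative and may differ from your actual unit):

```ini
[Unit]
Description=My iot_local Service
After=network.target

[Service]
# Run as the ubuntu user so the home directory resolves to /home/ubuntu,
# letting boto3 find ~/.aws/config and ~/.aws/credentials there.
User=ubuntu
ExecStart=/usr/bin/python /home/ubuntu/myTemp/iot_local.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing the unit file, run sudo systemctl daemon-reload and then sudo systemctl restart iot_local.service for the change to take effect.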