Whenever I run a workflow on CircleCI, the test fails with an HTTP 500 error code, even though it is tested and passes when I run it locally.
When I run the test locally, I fetch the image from my storage and use it in the HTTP request. On CircleCI, however, I fetch the image with curl, put it into a folder, and then read it from there for the HTTP request, and this always fails when the build runs on CircleCI.
I am wondering whether I am storing the image incorrectly through curl and pointing at a path that ends up in the wrong place, or whether it is something else entirely. Even though an HTTP 500 error sounds like an issue with my API, I can confirm that when this runs locally I do not get the 500; the run finishes with: Time: 6.85 seconds, Memory: 28.00MB, OK (5 tests, 8 assertions).
I will post my dummy test function and my config.yaml below.
use Illuminate\Http\UploadedFile;
use Tests\TestCase;

class TestDummys extends TestCase
{
    private static $hostId;
    private static $access_token = '';
    private static $user;
    private static $charityId;

    public function testDummy()
    {
        self::$hostId = HostGroup::first()->id;
        self::$access_token = auth()->login(User::first());

        // Build an UploadedFile from the image already on disk
        $path = storage_path('testimage.png');
        $name = 'testimage.png';
        $file = new UploadedFile($path, $name, 'image/png', null, null, true);

        // POST the charity payload (including the image) with the bearer token
        $response = $this->withHeaders([
            'Authorization' => 'Bearer ' . self::$access_token,
        ])->json('POST', '/host/' . self::$hostId . '/charity/external', [
            'name' => 'Charity',
            'contact' => 'foo@gmail.com',
            'registration_number' => '12345',
            'account_number' => '12345',
            'sort_code' => '12345',
            'country_code' => 'GB',
            'iban' => '124535',
            'image' => $file,
        ]);

        $response->assertStatus(200);
    }
}
Config.yaml
version: 2
jobs:
  build:
    docker:
      # Specify the version you desire here
      - image: circleci/php:7.3.3
      - image: circleci/python:3.7.3
    steps:
      # Install pip
      - run: sudo apt install python-pip
      # Install aws-cli
      - run:
          name: Install aws-cli
          command: sudo pip install awscli
      # Install sam-cli
      - run:
          name: Install sam-cli
          command: sudo pip install aws-sam-cli
      - checkout
      - run: sudo apt update
      - run: sudo apt install zlib1g-dev libsqlite3-dev
      - run: sudo apt-get update
      - run: sudo apt-get install -y libjpeg62-turbo-dev libpng-dev libfreetype6-dev
      - run: sudo docker-php-ext-install zip pdo mysqli pdo_mysql mbstring tokenizer ctype json bcmath gd
      - run: sudo docker-php-ext-enable pdo_mysql
      # Download and cache dependencies
      - restore_cache:
          keys:
            # "composer.lock" can be used if it is committed to the repo
            - v1-dependencies-{{ checksum "composer.json" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run: composer install -n --prefer-dist
      - save_cache:
          key: v1-dependencies-{{ checksum "composer.json" }}
          paths:
            - ./vendor
      # prepare the database
      - run: touch /tmp/testing.sqlite
      - run: php artisan migrate --database=sqlite --force
      # download the image used by the dummy test
      - run: curl https://d3qyaps1yzzqpv.cloudfront.net/images/eb_1554715247_2158207.png -o /tmp/testimage.png
      # run tests with phpunit or codecept
      - run: ./vendor/bin/phpunit
      # delete test database
      - run: sudo rm /tmp/testing.sqlite
      # set environment variables to .env
      - run: ....
      - run: ....
      - run: ....
      - run: ....
      - run: ....
      - run: ....
      - run: ....
      - run: ....
      # commit to package
      - run: composer install --optimize-autoloader --no-dev
      - run: sudo php artisan cache:clear
      - run: sudo php artisan view:clear
      - run: sudo php artisan config:clear
      - run: sudo php artisan route:clear
      - run: sam package --output-template-file .stack.yaml --s3-bucket ticketpass-api
      - run: sam deploy --template-file .stack.yaml --capabilities CAPABILITY_IAM --stack-name ticketpass-api
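To spell out the part I was unsure about: the curl step above saves the image to /tmp/testimage.png, while the test builds its UploadedFile from storage_path('testimage.png'), so my worry was that these two paths might not point at the same file. As a sketch only, a download step that writes straight to the location the test reads (assuming storage_path() resolves to the project's storage/ directory in the checked-out code, which may not match your setup) could look like:

      # Hypothetical alternative download step: write the image to the path the
      # test reads; assumes storage_path('testimage.png') resolves to
      # ./storage/testimage.png in the checked-out project.
      - run: curl https://d3qyaps1yzzqpv.cloudfront.net/images/eb_1554715247_2158207.png -o storage/testimage.png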
Initially, through a dump of the error, I spotted something related to the Redis server (which I am using): for some reason Redis was not running by default, so I had to start it by updating the config.yaml file. Afterwards I was getting a different kind of error, this time about the region of my S3 bucket. Since the POST request stores the image in an S3 bucket, the request was complaining about the bucket's region. I fixed this by creating a new S3 bucket in the same region the request was coming from, and that resolved my issue.
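In case it is useful, the Redis part was handled in config.yaml. This is only a sketch of the approach, using the official redis image from Docker Hub as a secondary service container (the image/tag, or starting redis-server in a run step instead, depends on your setup):

    docker:
      - image: circleci/php:7.3.3
      - image: circleci/python:3.7.3
      # secondary service container so a Redis server is already running
      # while phpunit executes (redis:alpine chosen here only as an example)
      - image: redis:alpine

With a secondary container like this, Redis should be reachable on 127.0.0.1:6379 from the primary PHP container, which matches Laravel's default REDIS_HOST and REDIS_PORT.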