My intention is to have my static website files (a React build, if that's a factor) accessible only via my domain and not directly through S3 URLs. It seems to be working on my own computer (though that might be CloudFront cache from when the bucket was public), but other clients receive only S3 error messages in XML. Requesting the bare domain with no path gives a response, but requesting any path (e.g. /index.html, a file that exists in my bucket) gives an error response with the code NoSuchKey.
What am I doing wrong? Here's the current configuration.
Edit: my bucket policy (do I need to add another action?)
{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EZOBXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::subdomain.mydomain.com/*"
        }
    ]
}
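A quick way to check whether a response is actually coming back through CloudFront or straight from S3 is to look at the response headers: CloudFront adds X-Cache and Via headers, while a direct S3 response has neither (just Server: AmazonS3). A diagnostic sketch, using the alternate domain name from my setup above:

curl -sI https://subdomain.mydomain.com/index.html | grep -iE '^(x-cache|via|server):'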
Two changes got this working nicely. Thanks to Michael for his help.

1. Granting the CloudFront Origin Access Identity read access to each uploaded object, via the --grants flag of the sync command in the AWS command line interface (CLI). In package.json scripts: "deploy": "aws s3 sync build/ s3://subdomain.mydomain.com --delete --grants read=id=S3CANONICALIDOFORIGINACCESSIDENTITY" (the canonical ID can be looked up with the CLI; see the sketch after this list).
2. Invalidating /* rather than just /index.html. I had previously been invalidating only /index.html, then feeling confused when my app didn't seem to update. I think this was because, although index.html is always served to users, it's never explicitly requested. My routes looked like / or /dashboard, which were not being invalidated. Now I clear the CloudFront cache by invalidating /* (a sample command is below).
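For #1, the S3 canonical user ID used in --grants doesn't have to be copied by hand; it can be read off the Origin Access Identity itself. A sketch, using the OAI ID from the bucket policy above:

# Print the S3 canonical user ID of the Origin Access Identity,
# i.e. the value passed as read=id=... in the deploy script.
aws cloudfront get-cloud-front-origin-access-identity --id EZOBXXXXXXXXX \
    --query 'CloudFrontOriginAccessIdentity.S3CanonicalUserId' --output text

For #2, the invalidation can also be run from the CLI (the distribution ID here is a placeholder):

# Invalidate every cached path so routes like / and /dashboard pick up the new build.
aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE --paths "/*"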
Edit: It's been a few years, but I don't think #1 is necessary. I've set up several distributions since then with only the bucket policy defining permissions.
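In that case the deploy script can presumably be reduced to a plain sync (same bucket as above, no --grants flag):

"deploy": "aws s3 sync build/ s3://subdomain.mydomain.com --delete"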