Tags: java, minio, aws-sdk-java-2.0

AWS Java SDK 2 (putObject) + MinIO: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256


I have run MinIO in a Docker container, created a bucket (test), made the bucket public, created an access key, and set a server location (us-east-1).

After that I build a client:

final S3Client client = S3Client.builder()
                .endpointOverride(URI.create("http://localhost:9000"))
                .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create("assess_key", "secret_key")))
                .region(Region.US_EAST_1)
                .build();

And try to put an object into the bucket:

PutObjectRequest request = PutObjectRequest.builder()
                .bucket("test")
                .key(id.toString())
                .build();

client.putObject(request, RequestBody.fromFile(filePath));

And I get the exception:

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. (Service: S3, Status Code: 400...

How can I fix it with AWS Java SDK 2?

P.S. I can list buckets using the client.


Solution

  • You should use forcePathStyle(true) when creating the client. So your example should read:

    final S3Client client = S3Client.builder()
                    .endpointOverride(URI.create("http://localhost:9000"))
                    .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create("assess_key", "secret_key")))
                    .region(Region.US_EAST_1)
                    .forcePathStyle(true)
                    .build();
    

    If you don't specify it, the AWS SDK puts the bucket name in the hostname, so it tries to go to http://test.localhost:9000/ instead of http://localhost:9000/test/. I would have expected a different error message, though (something like test.localhost cannot be resolved).

    I tried this myself, and with the latest AWS SDK for Java (2.25.49) I was able to upload an object to MinIO without any issues. Listing buckets probably worked because it doesn't involve a bucket name, so the request is sent to the endpoint as-is.

    Explanation

    AWS S3 used fixed endpoints when the service launched in 2006. With path-style addressing you address https://<endpoint>/<bucket>, which requires the endpoint to forward traffic to the cluster that handles that specific bucket. To make this more efficient, AWS switched to DNS-based bucket addressing, where you refer to https://<bucket>.<endpoint>/ (virtual-hosted style) instead; the AWS DNS resolver can then direct the client straight to the proper cluster.
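    The difference between the two styles can be sketched with plain Java URI handling. This is a toy illustration, not the SDK's actual internals; the class and helper names here are made up:

    ```java
    import java.net.URI;

    public class S3AddressingSketch {
        // Hypothetical helpers for illustration only; the AWS SDK builds these URLs internally.
        static URI pathStyle(URI endpoint, String bucket, String key) {
            // https://<endpoint>/<bucket>/<key>
            return endpoint.resolve("/" + bucket + "/" + key);
        }

        static URI virtualHosted(URI endpoint, String bucket, String key) {
            // https://<bucket>.<endpoint>/<key>
            return URI.create(endpoint.getScheme() + "://" + bucket + "." + endpoint.getAuthority() + "/" + key);
        }

        public static void main(String[] args) {
            URI endpoint = URI.create("http://localhost:9000");
            System.out.println(pathStyle(endpoint, "test", "my-object"));     // http://localhost:9000/test/my-object
            System.out.println(virtualHosted(endpoint, "test", "my-object")); // http://test.localhost:9000/my-object
        }
    }
    ```

    With forcePathStyle(true) the SDK produces the first form, which is what a single-host MinIO deployment expects.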

    AWS handles all the DNS infrastructure and ensures that each virtual host points to the proper endpoint. With MinIO, users need to manage their own DNS, so using the virtual-hosted style would require updating DNS whenever a new bucket is added (you would probably need wildcard resolution for the subdomains created by prepending bucket names). That's hard, so MinIO sticks to path-style URLs by default.
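    For completeness: MinIO can also serve virtual-hosted-style requests if you set the MINIO_DOMAIN environment variable and provide wildcard DNS for that domain. A rough sketch (the domain name here is made up):

    ```shell
    # Sketch only: tells MinIO to treat <bucket>.minio.example.com as bucket addressing.
    # Requires a wildcard DNS record (*.minio.example.com) pointing at this server.
    export MINIO_DOMAIN=minio.example.com
    minio server /data
    ```

    For a local single-host setup like yours, path-style plus forcePathStyle(true) is much simpler.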

    AWS is pushing toward virtual-hosted style, but (fortunately) still allows enabling path-style with forcePathStyle(true). If you don't explicitly enable it, the AWS SDK for Java prepends the bucket name to the endpoint, so it may address https://bucket.minio.example.com/ instead of https://minio.example.com/bucket/. I would expect your DNS not to resolve that name, giving an error that bucket.minio.example.com can't be resolved. But maybe your DNS did resolve it to some address (perhaps due to wildcards) and the request then failed authentication.