Tags: amazon-web-services, amazon-s3, apache-spark, emr, apache-zeppelin

Permission Denied accessing an S3 file from Zeppelin installed on EMR


I launched a cluster on EMR with the following settings (a rough programmatic equivalent is sketched after the list):

User: AdministratorPolicy (full access)
Key pairs: yes
Sandbox: Zeppelin
Applications: Spark 1.5.0, Hadoop 2.6.0
IAM role: defaultEMRRole
Bootstrap actions: none
IAM users: all
Steps: none
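
For reference, a launch along these lines could also be done through the AWS Java SDK. The sketch below is an assumption-heavy translation of the console settings above: the release label emr-4.1.0 (an EMR release shipping Spark 1.5.0 and Hadoop 2.6.0), the "Zeppelin-Sandbox" application name, the instance types, and the <my-key-pair> placeholder are all illustrative, not taken from the original post.

    import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder
    import com.amazonaws.services.elasticmapreduce.model.{Application, JobFlowInstancesConfig, RunJobFlowRequest}

    object LaunchCluster extends App {
      // The client picks up credentials from the default provider chain.
      val emr = AmazonElasticMapReduceClientBuilder.defaultClient()

      val request = new RunJobFlowRequest()
        .withName("zeppelin-test")
        .withReleaseLabel("emr-4.1.0") // assumed EMR release with Spark 1.5.0 / Hadoop 2.6.0
        .withApplications(
          new Application().withName("Spark"),
          new Application().withName("Zeppelin-Sandbox"))
        .withServiceRole("defaultEMRRole")      // the IAM role named in the settings above
        .withJobFlowRole("EMR_EC2_DefaultRole") // EC2 instance profile the cluster nodes assume
        .withInstances(new JobFlowInstancesConfig()
          .withEc2KeyName("<my-key-pair>")      // hypothetical key pair name
          .withInstanceCount(3)                 // illustrative cluster size
          .withMasterInstanceType("m3.xlarge")
          .withSlaveInstanceType("m3.xlarge")
          .withKeepJobFlowAliveWhenNoSteps(true)) // no steps, so keep the cluster running

      println(emr.runJobFlow(request).getJobFlowId)
    }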

Then I open the Zeppelin UI from my local machine at:

instance-public-dns:8890

It loads successfully.

I create a new notebook and run:

sc

which returns:

res42: org.apache.spark.SparkContext = org.apache.spark.SparkContext@523b1d4c

Then I try to load data from S3 into Spark:

sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId","++")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey","++")
var textFile = sc.textFile("s3n://<instance>/<bucket-name>/pagecounts-20081001-070000")
textFile.first()

and get this error:

com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: FD784A9D96A0D54A), S3 Extended Request ID: oOgHwbN8tW2TIxpgagPIZ+NpsTmymzh6wiJ2a6zYhD8XeiH3pHVKpTOeYXOS0dzgBGqKsjr+ls8=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)

Solution

  • You should not need to set "fs.s3n.awsAccessKeyId" or "fs.s3n.awsSecretAccessKey". Can you try not setting those and just using "s3" instead of "s3n":

    var textFile = sc.textFile("s3://<instance>/<bucket-name>/pagecounts-20081001-070000")
    textFile.first()
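
For reference, a complete corrected notebook paragraph might look like the sketch below. The <bucket-name> placeholder is hypothetical; on EMR, the "s3" scheme is served by EMRFS, which picks up credentials from the cluster's EC2 instance profile, so nothing needs to be hard-coded. Note that it is that instance profile's role, not your own IAM user, that needs read access (s3:GetObject) to the bucket.

    // No fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey settings needed:
    // on EMR the s3:// scheme goes through EMRFS, which reads credentials
    // from the cluster's EC2 instance profile.
    val textFile = sc.textFile("s3://<bucket-name>/pagecounts-20081001-070000")

    // Trigger an action to confirm the read works end to end.
    println(textFile.first())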