I am trying to decrypt files that arrive periodically in our S3 bucket. How can I process a file if it is huge (e.g., 10 GB), given that Lambda's computing resources are limited? I'm not sure whether it is necessary to download the whole file into Lambda to perform the decryption, or whether there is some way to chunk the file and process it.
Edit: Processing the file here means decrypting it, then parsing each row and writing it to a persistent store such as a queue or a SQL database.
You can set a byte range on the GetObjectRequest to load only a specific range of bytes from an S3 object.
The following example comes from the official AWS documentation for the S3 GetObject API:
// Get a range of bytes from an object and print the bytes.
GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key)
        .withRange(0, 9); // inclusive range: the first 10 bytes
S3Object objectPortion = s3Client.getObject(rangeObjectRequest);
System.out.println("Printing bytes retrieved.");
displayTextInputStream(objectPortion.getObjectContent());
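Applied to a large object, you can loop over successive byte ranges so that only one chunk is in memory at a time. Below is a minimal sketch along those lines, assuming the AWS SDK for Java v1; the bucket name, key, and 8 MB chunk size are placeholders of mine, not values from the documentation:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;

public class RangedReadExample {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
        String bucketName = "my-bucket";        // hypothetical bucket
        String key = "incoming/huge-file.enc";  // hypothetical key

        // Ask S3 for the object size up front so we know when to stop.
        long objectSize = s3Client.getObjectMetadata(bucketName, key).getContentLength();
        long chunkSize = 8L * 1024 * 1024; // 8 MB per ranged GET (arbitrary choice)

        for (long start = 0; start < objectSize; start += chunkSize) {
            long end = Math.min(start + chunkSize, objectSize) - 1; // range is inclusive
            GetObjectRequest rangeRequest =
                    new GetObjectRequest(bucketName, key).withRange(start, end);
            try (S3Object portion = s3Client.getObject(rangeRequest)) {
                byte[] chunk = IOUtils.toByteArray(portion.getObjectContent());
                // Hand the chunk to your decryption/parsing logic here.
            }
        }
    }
}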
For more information on ranged GETs and downloading objects, see the documentation: https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html
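One caveat from my side: whether ranged reads help with decryption depends on the cipher mode. AES/CTR lets you decrypt starting from an arbitrary offset, while CBC or GCM generally require reading from the beginning. If sequential reading is acceptable, you can skip the range arithmetic entirely and treat getObjectContent() as a plain InputStream, decrypting on the fly so the 10 GB file never has to fit in memory or on disk. A minimal sketch, assuming AES/CTR, hypothetical key material, and a hypothetical processRow helper for the per-row persistence step:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;

import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamingDecryptExample {
    public static void main(String[] args) throws Exception {
        String bucketName = "my-bucket";        // hypothetical bucket
        String key = "incoming/huge-file.enc";  // hypothetical key
        byte[] aesKey = new byte[32];           // placeholder 256-bit key
        byte[] iv = new byte[16];               // placeholder IV

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
        S3Object object = s3Client.getObject(bucketName, key);

        // AES/CTR decrypts as a stream, so only the current buffer is
        // ever held in memory, not the whole object.
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(aesKey, "AES"),
                new IvParameterSpec(iv));

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(
                        new CipherInputStream(object.getObjectContent(), cipher),
                        StandardCharsets.UTF_8))) {
            String row;
            while ((row = reader.readLine()) != null) {
                processRow(row); // hypothetical: parse and write to queue/DB
            }
        }
    }

    private static void processRow(String row) {
        // Stub: send to a queue, insert into a database, etc.
        System.out.println(row);
    }
}

Keep Lambda's 15-minute maximum execution time in mind: if decrypting and persisting 10 GB takes longer than that, the same streaming code can run unchanged on Fargate or EC2 instead.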