python, amazon-s3, aws-lambda, boto3, aws-serverless

How to read a CSV file from an S3 bucket in AWS Lambda?


I am trying to read the content of a CSV file that was uploaded to an S3 bucket. To do so, I get the bucket name and the file key from the event that triggered the Lambda function, then read the object line by line. Here is my code:

import json
import os
import boto3
import csv

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        file_key = record['s3']['object']['key']
        s3 = boto3.client('s3')
        csvfile = s3.get_object(Bucket=bucket, Key=file_key)
        csvcontent = csvfile['Body'].read().split(b'\n')
        data = []
        with open(csvcontent, 'r') as csv_file:
          csv_file = csv.DictReader(csv_file)
          data = list(csv_file)

The exact error I’m getting in CloudWatch is:

[ERROR] TypeError: expected str, bytes or os.PathLike object, not list
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 19, in lambda_handler
    with open(csvcontent, 'r') as csv_file:

Could someone help me fix this? I appreciate any help you can provide, as I am new to Lambda.


Solution

  • csvfile = s3.get_object(Bucket=bucket, Key=file_key)
    csvcontent = csvfile['Body'].read().split(b'\n')
    

    Here you have already retrieved the file contents and split them into lines, so there is no file on disk to open() again. One catch: Body.read() returns bytes, and csv.DictReader expects an iterable of strings, so decode before splitting and pass the result straight into your reader:

    csvcontent = csvfile['Body'].read().decode('utf-8').splitlines()
    csv_data = csv.DictReader(csvcontent)
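To see why the decode step matters, here is a small self-contained sketch that needs no S3 access; the hard-coded byte string stands in for what Body.read() would return:

```python
import csv

# Simulated S3 object body: raw bytes, as returned by Body.read()
raw = b"name,age\nalice,30\nbob,25\n"

# csv.DictReader expects an iterable of strings, not bytes,
# so decode to str and split into lines first
lines = raw.decode('utf-8').splitlines()
rows = list(csv.DictReader(lines))

# Each row is a dict keyed by the header line, e.g. rows[0]['name'] == 'alice'
```

Note that splitlines() also avoids the trailing empty element you would get from split('\n') when the file ends with a newline.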