Tags: go, concurrency, channel, goroutine

How to handle errors in a goroutine


I have a service that uploads files to AWS S3. I tried uploading the file both with and without goroutines. Without goroutines, the handler has to wait until the upload finishes before it can respond; with goroutines, the upload runs in the background and the response reaches the client faster.

But what if the upload fails when I use a goroutine, so the file never reaches AWS S3? How can I handle that?

Here is my function to upload the file:

func uploadToS3(s *session.Session, size int64, name string, buffer []byte) (string, error) {

    tempFileName := "pictures/" + bson.NewObjectId().Hex() + "-" + filepath.Base(name)

    _, err := s3.New(s).PutObject(&s3.PutObjectInput{
        Bucket:               aws.String("myBucketNameHere"),
        Key:                  aws.String(tempFileName),
        ACL:                  aws.String("public-read"),
        Body:                 bytes.NewReader(buffer),
        ContentLength:        aws.Int64(size),
        ContentType:          aws.String(http.DetectContentType(buffer)),
        ContentDisposition:   aws.String("attachment"),
        ServerSideEncryption: aws.String("AES256"),
        StorageClass:         aws.String("INTELLIGENT_TIERING"),
    })

    if err != nil {
        return "", err
    }

    return tempFileName, nil
}

func UploadFile(db *gorm.DB) func(c *gin.Context) {
    return func(c *gin.Context) {
        file, err := c.FormFile("file")
        if err != nil {
            fmt.Println(err)
            return
        }

        f, err := file.Open()
        if err != nil {
            fmt.Println(err)
            return
        }

        defer f.Close()
        buffer := make([]byte, file.Size)
        // A single Read is not guaranteed to fill the buffer; io.ReadFull reads all of it.
        if _, err := io.ReadFull(f, buffer); err != nil {
            fmt.Println(err)
            return
        }
        s, err := session.NewSession(&aws.Config{
            Region: aws.String("location here"),
            Credentials: credentials.NewStaticCredentials(
                    "id",
                    "key",
                    "",
                ),
        })
        if err != nil {
            fmt.Println(err)
            return
        }

        go uploadToS3(s, file.Size, file.Filename, buffer)

        c.JSON(200, "Image uploaded successfully")
    }
}

I was also wondering: what if there are a lot of upload requests, say 10,000+ in 5-10 minutes? Could some files fail to upload because of too many requests?


Solution

  • For any asynchronous task, such as uploading a file in a background goroutine, one can write the uploading function so that it returns a chan error to the caller. The caller can then react to the upload's eventual error (or nil for no error) at a later time by reading from that channel; see the sketch after this list.

    However, if you are accepting upload requests at volume, I'd suggest instead creating worker upload goroutines that accept file uploads via a channel. An output error channel can track success or failure, and if need be, a failed upload can be written back to the original upload channel, with a retry tally and a retry maximum so that a problematic payload does not loop forever; see the worker sketch below.
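
Here is a minimal sketch of the chan error approach, reusing the uploadToS3 function from the question; the name uploadAsync is illustrative, not part of any library:

func uploadAsync(s *session.Session, size int64, name string, buffer []byte) <-chan error {
    // A buffer of 1 lets the goroutine deliver its result and exit
    // even if the caller never gets around to reading the channel.
    errc := make(chan error, 1)
    go func() {
        _, err := uploadToS3(s, size, name, buffer)
        errc <- err // nil means the upload succeeded
    }()
    return errc
}

The handler can then respond to the client right away and react to the outcome later, for example by logging it:

errc := uploadAsync(s, file.Size, file.Filename, buffer)
go func() {
    if err := <-errc; err != nil {
        log.Printf("upload of %s failed: %v", file.Filename, err)
    }
}()
c.JSON(200, "Image uploaded successfully")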
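
And a sketch of the worker approach. The uploadJob type, startUploadWorkers, and maxRetries are hypothetical names, and the jobs channel is assumed to be buffered so that re-queueing a failed job does not block the worker that sends it:

// uploadJob carries everything a worker needs, plus a retry counter.
type uploadJob struct {
    name    string
    size    int64
    buffer  []byte
    retries int
}

const maxRetries = 3

// startUploadWorkers launches n workers that drain the jobs channel.
// A failed job is re-queued until it has been retried maxRetries times.
func startUploadWorkers(s *session.Session, n int, jobs chan uploadJob, errs chan<- error) {
    for i := 0; i < n; i++ {
        go func() {
            for job := range jobs {
                _, err := uploadToS3(s, job.size, job.name, job.buffer)
                if err == nil {
                    continue // success
                }
                if job.retries < maxRetries {
                    job.retries++
                    jobs <- job // try again later; assumes jobs is buffered
                    continue
                }
                errs <- err // retries exhausted: report and drop the job
            }
        }()
    }
}

The handler then only enqueues the job and returns, while a separate goroutine drains errs (for example to log failures):

jobs <- uploadJob{name: file.Filename, size: file.Size, buffer: buffer}
c.JSON(200, "Image uploaded successfully")

This also addresses the 10,000+ requests concern: since only n goroutines ever talk to S3 at once, a burst of requests queues up in the channel (up to its capacity) instead of spawning one goroutine per request.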