Tags: go, concurrency, timeout

How to "continue" after timeout (deadline) in concurrent goroutines?


I'm making concurrent GET requests to different URLs (say 1000 in this case). For these requests, I followed a producer-consumer design: there are 50 workers (crawler goroutines) and 1 producer (which fills the channel with URLs).

The problem: I've set the client timeout to 15 seconds (I don't want to wait more than 15 seconds per request). But when a URL makes a goroutine wait longer than that, my code exits with

context deadline exceeded (Client.Timeout or context cancellation while reading body)

Wanted behaviour: when a server takes more than 15 seconds, I want the relevant goroutine to simply continue with the next URL.

Here is the code piece:

package main

import (
    "bufio"
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
    "sync"
    "time"
)

func crawler(wg *sync.WaitGroup, urlChannel <-chan string) {

    defer wg.Done()
    client := &http.Client{Timeout: 15 * time.Second} // single client is sufficient for multiple requests

    for urlItem := range urlChannel {

        req1, _ := http.NewRequest("GET", "http://"+urlItem, nil)                                           // generating the request
        req1.Header.Add("User-agent", "Mozilla/5.0 (X11; Linux i586; rv:31.0) Gecko/20100101 Firefox/74.0") // changing user-agent
        resp1, respErr1 := client.Do(req1)                                                                  // sending the prepared request and getting the response
        if respErr1 != nil {
            fmt.Println("server error", urlItem)
            continue
        }

        if resp1.StatusCode/100 == 2 { // means server responded with 2xx code

            f1, fileErr1 := os.Create("200/" + urlItem + "_original.txt") // creating the relative file
            if fileErr1 != nil {
                fmt.Println("file error", urlItem)
                log.Fatal(fileErr1)
            }

            _, writeErr1 := io.Copy(f1, resp1.Body) // writing the sourcecode into our file
            if writeErr1 != nil {
                fmt.Println("file error", urlItem)
                log.Fatal(writeErr1)
            }
            f1.Close()
            resp1.Body.Close()

            fmt.Println("success:", urlItem)

        }
    }
}

func main() {

    var wg sync.WaitGroup // synchronization to wait for all the goroutines

    file, err := os.Open("urls.txt") // the file containing the url's
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close() // don't forget to close the file

    urlChannel := make(chan string) // create a channel to store all the url's

    _ = os.Mkdir("200", 0755) // if it's there, it will create an error, and we will simply ignore it

    for i := 0; i < 50; i++ {
        wg.Add(1)
        go crawler(&wg, urlChannel)
    }

    scanner := bufio.NewScanner(file) // each line has another url
    for scanner.Scan() {
        urlChannel <- scanner.Text()
    }
    close(urlChannel)
    wg.Wait()
}

Specifically, I thought I was handling the problem here (but apparently I was not):

resp1, respErr1 := client.Do(req1) // sending the prepared request and getting the response
if respErr1 != nil {
    fmt.Println("server error", urlItem)
    continue
}

How can I achieve the wanted behaviour (skipping the URL if timeout is reached)?


Solution

  • The problem is probably here:

        _, writeErr1 := io.Copy(f1, resp1.Body) // writing the sourcecode into our file
        if writeErr1 != nil {
            fmt.Println("file error", urlItem)
            log.Fatal(writeErr1)
        }

    The error returned by this operation is not necessarily a write error; it can also be a read error, and in this case it probably is: the request times out while reading the response body.

    Do not call log.Fatal in this case. log.Fatal prints the error and then calls os.Exit(1), which terminates the entire program; that is why one slow URL brings everything down. Log the error and continue with the next URL instead.
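
    To make that concrete, here is a sketch of the crawler function with every per-URL failure handled by logging and continuing rather than by log.Fatal. It drops into the program from the question (same imports and an unchanged main); the extra resp.Body.Close() calls and the wording of the error messages are additions for illustration, not part of the code in the question.

        // Revised crawler: every per-URL failure is logged and the loop moves on,
        // instead of calling log.Fatal (which exits the whole process).
        func crawler(wg *sync.WaitGroup, urlChannel <-chan string) {
            defer wg.Done()
            client := &http.Client{Timeout: 15 * time.Second}

            for urlItem := range urlChannel {
                req, err := http.NewRequest("GET", "http://"+urlItem, nil)
                if err != nil {
                    fmt.Println("request error", urlItem, err)
                    continue
                }
                req.Header.Add("User-agent", "Mozilla/5.0 (X11; Linux i586; rv:31.0) Gecko/20100101 Firefox/74.0")

                resp, err := client.Do(req)
                if err != nil {
                    fmt.Println("server error", urlItem, err) // covers timeouts that hit before the body is read
                    continue
                }

                if resp.StatusCode/100 == 2 {
                    f, err := os.Create("200/" + urlItem + "_original.txt")
                    if err != nil {
                        fmt.Println("file error", urlItem, err)
                        resp.Body.Close()
                        continue // skip this URL instead of exiting
                    }

                    // io.Copy can also fail on the read side: with Client.Timeout set,
                    // a slow body shows up here as "context deadline exceeded".
                    if _, err := io.Copy(f, resp.Body); err != nil {
                        fmt.Println("copy error (likely timeout while reading body)", urlItem, err)
                        f.Close()
                        resp.Body.Close()
                        continue // move on to the next URL
                    }
                    f.Close()
                    fmt.Println("success:", urlItem)
                }
                resp.Body.Close() // close the body on every remaining path so connections can be reused
            }
        }

    With main unchanged, a URL that takes longer than 15 seconds now just produces a printed error from client.Do or io.Copy, and the worker carries on with the next item from the channel.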