
Why is using a semaphore slowing down my Go program?


I've made a program that scrapes all the pages of a website using goroutines:

func main() {
    start := time.Now()

    knownUrls := getKnownURLs(os.Getenv("SITEMAP_URL"))

    var wg sync.WaitGroup
    for index, url := range knownUrls {
        wg.Add(1)

        fmt.Printf("%d/%d\n", index+1, len(knownUrls))

        go func() {
            if err := indexArticleFromURL(url, client); err != nil {
                log.Fatalf("Error indexing doc: %s", err)
            }
            wg.Done()
        }()
    }

    wg.Wait()

    elapsed := time.Since(start)
    fmt.Printf("Took %s", elapsed)
}

This works shockingly fast: 5.9s for a thousand pages, to be exact. But it bothers me that a website with thousands of pages would spawn thousands of goroutines.

So I refactored it using the golang.org/x/sync/semaphore package. From what I understand, it should limit the number of goroutines to what the processor can handle. That shouldn't decrease performance, since the program above already physically could not use more threads than the CPU provides.

func main() {
    start := time.Now()
    ctx := context.Background()

    knownUrls := getKnownURLs(os.Getenv("SITEMAP_URL"))

    var (
        maxWorkers = runtime.GOMAXPROCS(0)
        sem        = semaphore.NewWeighted(int64(maxWorkers))
    )

    for index, url := range knownUrls {
        if err := sem.Acquire(ctx, 1); err != nil {
            log.Printf("Failed to acquire semaphore: %v", err)
            break
        }

        fmt.Printf("%d/%d\n", index+1, len(knownUrls))

        go func() {
        if err := indexArticleFromURL(url, client); err != nil {
                log.Fatalf("Error indexing doc: %s", err)
            }
            sem.Release(1)
        }()
    }

    if err := sem.Acquire(ctx, int64(maxWorkers)); err != nil {
        log.Printf("Failed to acquire semaphore: %v", err)
    }

    elapsed := time.Since(start)
    fmt.Printf("Took %s", elapsed)
}

But now when I run the program it takes significantly more time: 11+ seconds.

It seems like this shouldn't be the case, since runtime.GOMAXPROCS(0) returns the maximum number of CPUs that can execute Go code simultaneously.

Why is the semaphore version slower? And how do I make it match the performance of the unbounded version, while making sure the number of goroutines won't crash the program?


Solution

  • With your original code, you have one OS thread per CPU core, but far more goroutines than threads. This is fine and normal: the Go runtime task-switches between goroutines internally, without getting the kernel scheduler involved, parking a goroutine whenever it's waiting for I/O and switching to another one (the first sketch below demonstrates this).

    If a task spends 99.999% of its time waiting on a network resource and 0.001% on CPU, one core can comfortably handle 1,000,000 goroutines at a time. You need enough memory for their stacks and heap allocations, and the network protocol needs to be latency-tolerant enough that the remote server won't time out while a goroutine waits to be scheduled (and if all the connections go to the same server, that server has to be willing to handle the load). But as long as you have that memory, and the remote service (and the intervening network stack) is as robust as your client-side code, you're fine. (HTTP/2 can multiplex many concurrent requests over a single TCP connection -- hopefully you're using it here.)


    When you introduce a semaphore with only as many slots as there are CPU cores, you defeat that machinery entirely: instead of balancing thousands of requests at a time (working on the ones that are ready and parking the ones that aren't), your code only ever has as many requests in flight as there are CPU cores. Of course it's slower; how could it be anything but? The answer is to bound concurrency by what the network and the remote server can handle, not by the CPU count -- see the second sketch below.
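
Here is a minimal, self-contained sketch of that first point (my illustration, not code from the question): with GOMAXPROCS pinned to 1, ten thousand goroutines that do nothing but wait still finish in roughly the time of a single wait, because the runtime parks each sleeper and runs another.

package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

func main() {
    // Allow only one CPU core to execute Go code at a time.
    runtime.GOMAXPROCS(1)

    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < 10_000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Stand-in for a network call: the goroutine is parked while
            // it waits, so it costs almost no CPU time.
            time.Sleep(100 * time.Millisecond)
        }()
    }
    wg.Wait()

    // Prints roughly 100ms, not 10,000 x 100ms: one core happily juggles
    // all 10,000 goroutines because they spend their time waiting.
    fmt.Printf("10,000 goroutines on one core took %s\n", time.Since(start))
}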
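
And a sketch of the fix, built directly on the question's code (getKnownURLs, indexArticleFromURL, and client are the question's own helpers; 256 is an arbitrary starting point to tune): keep the weighted semaphore, but size it by how many requests your network and the target server can comfortably have in flight, not by runtime.GOMAXPROCS(0).

func main() {
    start := time.Now()
    ctx := context.Background()

    knownUrls := getKnownURLs(os.Getenv("SITEMAP_URL"))

    // Bound the number of in-flight requests by what the network and the
    // remote server can tolerate; these goroutines are I/O-bound, so the
    // limit has nothing to do with the CPU count.
    const maxInFlight = 256 // tune for your target server
    sem := semaphore.NewWeighted(maxInFlight)

    for index, url := range knownUrls {
        if err := sem.Acquire(ctx, 1); err != nil {
            log.Printf("Failed to acquire semaphore: %v", err)
            break
        }

        fmt.Printf("%d/%d\n", index+1, len(knownUrls))

        // Pass url as an argument so the goroutine doesn't share the loop
        // variable (relevant on Go versions before 1.22).
        go func(url string) {
            defer sem.Release(1)
            if err := indexArticleFromURL(url, client); err != nil {
                // Log instead of Fatalf so one failed page doesn't kill the run.
                log.Printf("Error indexing doc: %s", err)
            }
        }(url)
    }

    // Wait for all workers to finish by acquiring the semaphore's full weight.
    if err := sem.Acquire(ctx, maxInFlight); err != nil {
        log.Printf("Failed to acquire semaphore: %v", err)
    }

    fmt.Printf("Took %s\n", time.Since(start))
}

If you prefer, golang.org/x/sync/errgroup gives you the same bound more tersely via Group.SetLimit, and the HTTP client's Transport can cap connections per host (MaxConnsPerHost) independently of how many goroutines you start.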