I'm trying to parallelize calls to an API to speed things up, but I have a problem: I need to stop spinning up goroutines if one of the calls returns an error. Since the channel may be closed twice (once in the error-handling branch and once when execution finishes), I'm getting a panic: close of closed channel error. Is there an elegant way to handle this without the program panicking? Any help would be appreciated!
Here is a pseudo-code snippet of what I'm doing:
for i := 0; i < someNumber; i++ {
    go func(num int, q chan<- bool) {
        value, err := callAnAPI()
        if err != nil {
            close(q) // exit from the for-loop
        }
        // process the value here
        wg.Done()
    }(i, quit)
}
close(quit)
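Closing an already-closed channel always panics in Go, which is exactly what happens here when close(quit) runs after a failing goroutine has already closed the channel (or when two goroutines both fail). A minimal repro of the panic:

package main

func main() {
    q := make(chan bool)
    close(q)
    close(q) // panic: close of closed channel
}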
To mock my scenario, I have written the following program. Is there any way to exit the for-loop gracefully once the condition (commented out) is satisfied?
package main

import (
    "fmt"
    "sync"
)

func receive(q <-chan bool) {
    for {
        select {
        case <-q:
            return
        }
    }
}

func main() {
    quit := make(chan bool)
    var result []int
    wg := &sync.WaitGroup{}
    wg.Add(10)
    for i := 0; i < 10; i++ {
        go func(num int, q chan<- bool) {
            // if num == 5 {
            //     close(q)
            // }
            result = append(result, num) // note: unsynchronized append from many goroutines is a data race
            wg.Done()
        }(i, quit)
    }
    close(quit)
    receive(quit)
    wg.Wait()
    fmt.Printf("Result: %v\n", result)
}
You can use the context package, which defines the Context type; a Context carries deadlines, cancellation signals, and other request-scoped values across API boundaries and between processes.
package main

import (
    "context"
    "fmt"
    "sync"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // cancel when we are finished, even without error

    wg := &sync.WaitGroup{}
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(num int) {
            defer wg.Done()
            select {
            case <-ctx.Done():
                return // an error occurred somewhere, terminate
            default: // avoid blocking
            }
            // your code here
            // res, err := callAnAPI()
            // if err != nil {
            //     cancel()
            //     return
            // }
            if num == 5 {
                cancel()
                return
            }
            fmt.Println(num)
        }(i)
    }
    wg.Wait()
    fmt.Println(ctx.Err())
}
Try it on the Go Playground.
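Note the default case in the select: it keeps the check non-blocking. Each goroutine peeks at ctx.Done() once and proceeds if no cancellation has happened yet, so goroutines that have already passed the check still run to completion; cancellation only prevents further work from starting.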
You can also take a look at this answer for a more detailed explanation.
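If you also want the first error back (not just the stop signal), a sketch using golang.org/x/sync/errgroup may help; it wires the cancellation up for you. This assumes the x/sync module is available, and it simulates the failing API call with the same num == 5 condition as above:

package main

import (
    "context"
    "fmt"
    "sync"

    "golang.org/x/sync/errgroup"
)

func main() {
    g, ctx := errgroup.WithContext(context.Background())

    var (
        mu     sync.Mutex
        result []int
    )
    for i := 0; i < 10; i++ {
        num := i // capture the loop variable (needed before Go 1.22)
        g.Go(func() error {
            select {
            case <-ctx.Done():
                return ctx.Err() // another call already failed; stop early
            default:
            }
            if num == 5 { // stand-in for callAnAPI returning an error
                return fmt.Errorf("call %d failed", num)
            }
            mu.Lock() // result is shared, so guard the append
            result = append(result, num)
            mu.Unlock()
            return nil
        })
    }
    // Wait blocks until all goroutines return and yields the first error.
    if err := g.Wait(); err != nil {
        fmt.Println("stopped early:", err)
    }
    fmt.Printf("Result: %v\n", result)
}

Unlike the quit channel, nothing is ever closed twice here: the first error cancels the shared context, and later goroutines see ctx.Done() and bail out.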