I am iterating a loop 10000 times, starting a goroutine on an anonymous function in each iteration, and adding to a WaitGroup each time. I pass a string to that anonymous function and append the value to the slice a. Since I loop 10000 times, I expect the length of the slice to be 10000, but I see random numbers instead. I'm not sure what the issue is. Can anyone help me fix this problem?
Here is my code snippet:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg = new(sync.WaitGroup)
    var a []string
    for i := 0; i < 10000; i++ {
        wg.Add(1)
        go func(s string) {
            a = append(a, s)
            wg.Done()
        }("MaxPayne")
    }
    wg.Wait()
    fmt.Println(len(a))
}
Notice that when you append to a slice, you may actually create a new slice and then assign it back to the slice variable. So you have uncontrolled concurrent writes to the variable a. Concurrent writing to the same value is not safe in Go (or in most languages). To make it safe, you can serialize the writes with a mutex.
Try:
var lock sync.Mutex
var a []string
and
lock.Lock()
a = append(a, s)
lock.Unlock()
For more information about how a mutex works, see the Go Tour and the sync package documentation.
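Putting those pieces together, here is a minimal runnable sketch of the mutex approach (assuming the same 10000 iterations and the same "MaxPayne" string as in your snippet):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    var lock sync.Mutex
    var a []string
    for i := 0; i < 10000; i++ {
        wg.Add(1)
        go func(s string) {
            defer wg.Done()
            lock.Lock() // only one goroutine may append at a time
            a = append(a, s)
            lock.Unlock()
        }("MaxPayne")
    }
    wg.Wait()
    fmt.Println(len(a)) // reliably prints 10000
}

Each append is now wrapped in Lock/Unlock, so the read-modify-write of a happens one goroutine at a time.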
Here is a pattern to achieve a similar result, but without needing a mutex and still being safe.
package main

import (
    "fmt"
    "sync"
)

func main() {
    const sliceSize = 10000
    var wg = new(sync.WaitGroup)
    var a = make([]string, sliceSize)
    for i := 0; i < sliceSize; i++ {
        wg.Add(1)
        go func(s string, index int) {
            a[index] = s
            wg.Done()
        }("MaxPayne", i)
    }
    wg.Wait()
    fmt.Println(len(a))
}
This isn't exactly the same as your other program, but here's what it does. The memory access is now safe even without a mutex, because each goroutine writes only to its own index (and each goroutine gets a unique index), so none of the concurrent writes conflict with each other. After the slice is created with the desired size, the variable a itself never needs to be assigned to again, so the original data race is eliminated.
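As a side note, you can confirm this for yourself with Go's built-in race detector, e.g. go run -race main.go: your original append-based version should report a data race, while the mutex version and the per-index version should run clean.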