package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

func init() {
	runtime.GOMAXPROCS(runtime.NumCPU())
}

func main() {
	var t1 = time.Now()
	var ops uint64 = 0

	// Reporting goroutine: print the counter and the rate once per second.
	go func() {
		for {
			time.Sleep(time.Second)
			opsFinal := atomic.LoadUint64(&ops)
			fmt.Println("ops:", opsFinal, "qps:", opsFinal/uint64(time.Since(t1).Seconds()))
		}
	}()

	// Hot loop: increment the counter as fast as possible.
	for {
		atomic.AddUint64(&ops, 1)
		//runtime.Gosched()
	}
}

In this case the output is "ops: 0 qps: 0" every second. Why can't the goroutine read ops? But when I add runtime.Gosched(), everything is OK! Can anybody help me?
My read of The Go Memory Model is that this is a correct execution of the program you've written: nothing guarantees that the AddUint64() calls in the main program happen before the LoadUint64() calls in the goroutine, so it's legitimate for every read of the variable to happen before any write occurs. I wouldn't be totally shocked if the compiler knew about "sync/atomic" as special and concluded that the result of the increment was unobservable, so just deleted the final loop.
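If you really want to keep the atomic counter, the missing happens-before edge has to come from some other synchronization. For instance, in a bounded version of the program a sync.WaitGroup would supply it; roughly like this (just a sketch):

	package main

	import (
		"fmt"
		"sync"
		"sync/atomic"
	)

	func main() {
		var ops uint64
		var wg sync.WaitGroup

		// Several workers increment the shared counter, then call wg.Done().
		for w := 0; w < 4; w++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				for i := 0; i < 1000; i++ {
					atomic.AddUint64(&ops, 1)
				}
			}()
		}

		// wg.Wait() returns only after every wg.Done(), so this read is
		// guaranteed to observe all of the increments.
		wg.Wait()
		fmt.Println("ops:", atomic.LoadUint64(&ops))
	}

That doesn't help with a free-running counter like yours, though, which is why a channel-based design fits better here.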
Both The Go Memory Model and the sync/atomic documentation recommend against the approach you're using. "sync/atomic" admonishes:

	Share memory by communicating; don't communicate by sharing memory.
A better program might look like this:

	package main

	import (
		"fmt"
		"time"
	)

	// count owns the counter: it increments on every value received on op
	// and reports the total and the rate once per second.
	func count(op <-chan struct{}) {
		t1 := time.Now()
		ops := 0
		tick := time.Tick(time.Second)
		for {
			select {
			case <-op:
				ops++
			case <-tick:
				dt := time.Since(t1).Seconds()
				fmt.Printf("ops: %d qps: %f\n", ops, float64(ops)/dt)
			}
		}
	}

	func main() {
		op := make(chan struct{})
		go count(op)
		for {
			op <- struct{}{}
		}
	}
Note that no state is shared between the main program and the goroutine, except the data that is sent across the channel.
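If one channel send per operation turns out to be too much overhead for whatever you're measuring, one possible variation (just a sketch, with an arbitrary batch size) is to count locally in the sending goroutine and only send a delta every so often:

	package main

	import (
		"fmt"
		"time"
	)

	// countBatched receives deltas instead of single operations, but is
	// otherwise the same as count above.
	func countBatched(op <-chan int) {
		t1 := time.Now()
		ops := 0
		tick := time.Tick(time.Second)
		for {
			select {
			case delta := <-op:
				ops += delta
			case <-tick:
				dt := time.Since(t1).Seconds()
				fmt.Printf("ops: %d qps: %f\n", ops, float64(ops)/dt)
			}
		}
	}

	func main() {
		const batchSize = 1000 // arbitrary; tune to taste
		op := make(chan int)
		go countBatched(op)
		n := 0
		for {
			n++
			if n == batchSize {
				op <- n
				n = 0
			}
		}
	}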