I have a program that listens on UDP for log traffic, attempts to parse it, and then inserts it into Redis. At a certain level of traffic, memory usage seems to "explode" (rapidly increasing from a few hundred megabytes to gigabytes).
I've grabbed a heap profile shortly after this happens, and it shows the following:
(pprof) top100 -cum
Total: 1731.3 MB
0.0 0.0% 0.0% 1731.3 100.0% gosched0
1162.5 67.1% 67.1% 1162.5 67.1% newdefer
0.0 0.0% 67.1% 1162.5 67.1% runtime.deferproc
0.0 0.0% 67.1% 1162.0 67.1% main.TryParse
0.0 0.0% 67.1% 438.0 25.3% runtime.main
301.5 17.4% 84.6% 437.5 25.3% main.main
136.0 7.9% 92.4% 136.0 7.9% runtime.malg
0.0 0.0% 92.4% 136.0 7.9% runtime.newproc
0.0 0.0% 92.4% 136.0 7.9% runtime.newproc1
1.5 0.1% 92.5% 131.3 7.6% main.RedisCuller
0.0 0.0% 92.5% 108.5 6.3% github.com/garyburd/redigo/redis.(*conn).Do
0.0 0.0% 92.5% 108.5 6.3% github.com/garyburd/redigo/redis.(*conn).readReply
0.0 0.0% 92.5% 108.5 6.3% github.com/garyburd/redigo/redis.(*pooledConnection).Do
95.8 5.5% 98.0% 95.8 5.5% cnew
0.0 0.0% 98.0% 95.8 5.5% runtime.cnewarray
34.0 2.0% 100.0% 34.0 2.0% runtime.convT2E
0.0 0.0% 100.0% 0.5 0.0% main.init
0.0 0.0% 100.0% 0.5 0.0% net/http/pprof.init
0.0 0.0% 100.0% 0.5 0.0% sync.(*Once).Do
0.0 0.0% 100.0% 0.5 0.0% syscall.Getenv
0.0 0.0% 100.0% 0.5 0.0% time.init
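For reference, the profiling endpoints come from net/http/pprof (hence the net/http/pprof.init entry above); the setup is roughly the following sketch, where the listen address is just illustrative, not necessarily what the real program uses:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func init() {
	// Serve the profiling endpoints on a side port; "localhost:6060" is an
	// illustrative address, not taken from the original program.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
}

The snapshot itself is then pulled with go tool pprof pointed at the /debug/pprof/heap endpoint.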
When the program is "healthy", the profile looks more like this:
(pprof) top20 -cum
Total: 186.7 MB
0.0 0.0% 0.0% 186.7 100.0% gosched0
0.5 0.3% 0.3% 122.7 65.7% main.RedisCuller
0.0 0.0% 0.3% 103.5 55.4% github.com/garyburd/redigo/redis.(*pooledConnection).Do
0.0 0.0% 0.3% 103.0 55.2% github.com/garyburd/redigo/redis.(*conn).Do
0.0 0.0% 0.3% 103.0 55.2% github.com/garyburd/redigo/redis.(*conn).readReply
88.2 47.2% 47.5% 88.2 47.2% cnew
0.0 0.0% 47.5% 88.2 47.2% runtime.cnewarray
0.0 0.0% 47.5% 57.0 30.5% main.TryParse
57.0 30.5% 78.0% 57.0 30.5% newdefer
0.0 0.0% 78.0% 57.0 30.5% runtime.deferproc
34.0 18.2% 96.3% 34.0 18.2% runtime.convT2E
1.5 0.8% 97.1% 6.5 3.5% main.main
0.0 0.0% 97.1% 6.5 3.5% runtime.main
5.0 2.7% 99.7% 5.0 2.7% runtime.malg
0.0 0.0% 99.7% 5.0 2.7% runtime.newproc
0.0 0.0% 99.7% 5.0 2.7% runtime.newproc1
0.0 0.0% 99.7% 0.5 0.3% bufio.NewWriter
0.0 0.0% 99.7% 0.5 0.3% bufio.NewWriterSize
0.0 0.0% 99.7% 0.5 0.3% github.com/garyburd/redigo/redis.(*Pool).get
0.0 0.0% 99.7% 0.5 0.3% github.com/garyburd/redigo/redis.(*pooledConnection).get
The only defer in my code surrounds the parsing function (since parsing can frequently fail):
for {
	rlen, _, err := sock.ReadFromUDP(buf[0:])
	checkError(err)
	raw := logrow.RawRecord(string(buf[:rlen]))
	go TryParse(raw, c)
}
...
func TryParse(raw logrow.RawRecord, c chan logrow.Record) {
	defer func() {
		if r := recover(); r != nil {
			//log.Printf("Failed Parse due to panic: %v", raw)
			return
		}
	}()
	rec, ok := logrow.ParseRawRecord(raw)
	if !ok {
		//log.Printf("Failed Parse: %v", raw)
		return
	}
	c <- rec
}
Does anyone see anything obvious I'm doing wrong that could cause memory to balloon suddenly, or can you offer some direction for pinning it down?
Edit (more code around the logrow.Record channel):
c := make(chan logrow.Record)
...
go RedisInserter(c, bucket, retention, pool)

func RedisInserter(c chan logrow.Record, bucket, retention int, p *redis.Pool) {
	for rec := range c {
		logrow.SendToRedis(rec, bucket, retention, p)
	}
}
It turned out to be a leak in the closured deferred function (a problem in Go itself), as seen in:
defer func() {
	if r := recover(); r != nil {
		//log.Printf("Failed Parse due to panic: %v", raw)
		return
	}
}()
So rebuilding with the latest version of Go resolved the problem (reference: https://codereview.appspot.com/10784043/). However, the smarter thing to do is, of course, to design ParseRawRecord so it doesn't try to slice out of bounds and panic in the first place.
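For illustration, a bounds-checked parse might look like the following sketch; the field names and the "|" delimiter are made up, and only the up-front length check is the point:

package logrow

import "strings"

type RawRecord string

type Record struct {
	Host, Level, Message string
}

// Hypothetical sketch: the field names and the "|" delimiter are assumptions,
// not the real logrow format. The point is checking the field count before
// indexing, so malformed input returns ok == false instead of panicking on an
// out-of-bounds slice.
func ParseRawRecord(raw RawRecord) (Record, bool) {
	fields := strings.Split(string(raw), "|")
	if len(fields) < 3 {
		return Record{}, false
	}
	return Record{Host: fields[0], Level: fields[1], Message: fields[2]}, true
}

With something like that in place, the recover in TryParse becomes a last-resort safety net rather than part of the normal parse path.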