Tags: go, hashmap, object-pooling

Pooling Maps in Golang


Has anyone tried pooling maps in Go? I've read about pooling buffers, and I was wondering whether, by similar reasoning, it could make sense to pool maps when one has to create and destroy them frequently, or whether there is some a priori reason it would not be efficient. When a map is returned to the pool, one would have to iterate through it and delete every element. But the popular recommendation seems to be to create a new map rather than clearing and reusing one that has already been allocated, which makes me think that pooling maps may not be as beneficial.


Solution

  • If your maps change (a lot) in size by deleting or adding entries, this will cause new allocations and there will be no benefit to pooling them.

    If your maps do not change in size and only the values of existing keys change, then pooling will be a successful optimization.

    This works well when you read table-like structures, for instance CSV files or database tables. Each row contains exactly the same columns, so you don't need to clear any entries.

    The benchmark below shows no allocations when run with go test -benchmem -bench .

    package mappool
    
    import "testing"
    
    const SIZE = 1000000
    
    func BenchmarkMap(b *testing.B) {
        // Pre-populate the map once. The benchmark loop then only
        // overwrites values of existing keys, so the map never grows
        // and no new buckets are allocated.
        m := make(map[int]int)
    
        for i := 0; i < SIZE; i++ {
            m[i] = i
        }
    
        b.ResetTimer()
    
        for n := 0; n < b.N; n++ {
            for i := 0; i < SIZE; i++ {
                m[i] = m[i] + 1
            }
        }
    }
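
    To make the pooling itself concrete, here is a minimal sketch using sync.Pool from the standard library. The names mapPool and the use of map[int]int are illustrative assumptions, not from the question; the point is that a reused map keeps its allocated buckets, and if the key set is stable you can simply overwrite values instead of clearing.

    ```go
    package main
    
    import (
        "fmt"
        "sync"
    )
    
    // mapPool hands out map[int]int instances; New allocates a fresh
    // map only when the pool is empty. (Illustrative name and types.)
    var mapPool = sync.Pool{
        New: func() interface{} { return make(map[int]int) },
    }
    
    func main() {
        m := mapPool.Get().(map[int]int)
        for i := 0; i < 3; i++ {
            m[i] = i * i
        }
        fmt.Println(len(m))
    
        // With a stable key set you could skip this and just overwrite
        // values on the next use; otherwise delete entries before
        // returning the map, so the next user starts empty.
        for k := range m {
            delete(m, k)
        }
        mapPool.Put(m)
    
        // A later Get may return the cleared map (buckets intact) or a
        // fresh one from New; either way it starts with zero entries.
        m2 := mapPool.Get().(map[int]int)
        fmt.Println(len(m2))
    }
    ```

    Note that deleting all entries keeps the map's internal buckets allocated, which is exactly why reuse avoids allocations, but it does not shrink a map that once grew very large, so a pool can pin more memory than creating fresh maps would.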