Efficient way of storing 1TB of random data with Zarr


I'd like to store 1TB of random data backed by an on-disk Zarr array. Currently, I am doing something like the following:

import numpy as np
import zarr
from numcodecs import Blosc

compressor = Blosc(cname='lz4', clevel=5, shuffle=Blosc.BITSHUFFLE)
store = zarr.DirectoryStore('TB1.zarr')
root = zarr.group(store)
TB1 = root.zeros('data',
                 shape=(1_000_000, 1_000_000),
                 chunks=(20_000, 5_000),
                 compressor=compressor,
                 dtype='|i2')

for i in range(1_000_000): 
    TB1[i, :1_000_000] = np.random.randint(0, 3, size=1_000_000, dtype='|i2')

This is going to take some time. I know things could probably be improved by reusing the array instead of generating 1_000_000 fresh random numbers on every iteration, but I'd like some more randomness for now. Is there a better way to go about building this random dataset?
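
For context (an aside, not from the original post): with chunks of (20_000, 5_000), every single-row assignment partially updates 200 chunks, and each chunk is revisited once for every one of the 20_000 rows it spans, so each chunk gets read, decompressed, modified, recompressed and rewritten thousands of times. A quick back-of-the-envelope check:

shape = (1_000_000, 1_000_000)
chunks = (20_000, 5_000)

n_chunks = (shape[0] // chunks[0]) * (shape[1] // chunks[1])    # 10_000 chunks in total
chunk_writes = shape[0] * (shape[1] // chunks[1])               # one partial write per row per touched chunk
print(f"{chunk_writes:_} chunk writes vs {n_chunks:_} needed")  # 200_000_000 vs 10_000

So the row-by-row loop performs 200 million chunk writes where only 10 thousand are strictly needed, which is why the blocked writes below help.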

Update 1

Using bigger NumPy blocks speeds things up a bit, since each 100_000-row write now covers whole 20_000-row chunks instead of partially updating them:

for i in range(0, 1_000_000, 100_000): 
    TB1[i:i+100_000, :1_000_000] = np.random.randint(0, 3, size=(100_000, 1_000_000), dtype='|i2')
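
A variant of the same idea (a sketch, not from the original post) derives the block height from the array's own chunking rather than hard-coding it, and uses NumPy's newer Generator API, which is often faster than the legacy np.random.randint for bulk integer generation; the factor of 5 is just illustrative:

import numpy as np

rng = np.random.default_rng()   # PCG64-backed Generator
block = TB1.chunks[0] * 5       # 100_000 rows, aligned to the 20_000-row chunk grid

for i in range(0, TB1.shape[0], block):
    # Generator.integers is the modern replacement for randint
    TB1[i:i + block, :] = rng.integers(0, 3, size=(block, TB1.shape[1]), dtype='i2')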

Solution

  • I'd recommend using Dask Array, which enables parallel computation and storage of the random numbers, e.g.:

    import zarr                                                           
    from numcodecs import Blosc                                           
    import dask.array as da                                               
    
    shape = 1_000_000, 1_000_000                                          
    dtype = 'i2'                                                          
    chunks = 20_000, 5_000                                                
    compressor = Blosc(cname='lz4', clevel=5, shuffle=Blosc.BITSHUFFLE) 
    
    # set up zarr array to store data
    store = zarr.DirectoryStore('TB1.zarr')
    root = zarr.group(store) 
    TB1 = root.zeros('data', 
                     shape=shape, 
                     chunks=chunks, 
                     compressor=compressor, 
                     dtype=dtype) 
    
    # set up a dask array of random numbers with the same chunking
    d = da.random.randint(0, 3, size=shape, dtype=dtype, chunks=chunks)

    # compute and store; the dask chunks match the zarr chunks, so no two
    # tasks write to the same chunk and the store can run without a lock
    d.store(TB1, lock=False)
    

    By default Dask will compute using all available local cores, but it can also be configured to run on a cluster via the Distributed package.
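
    For example, here is a minimal sketch (the worker counts are illustrative) of running the same store on a local Distributed cluster:

    from dask.distributed import Client

    # start a local cluster; with a DirectoryStore, all workers
    # must be able to see the same filesystem
    client = Client(n_workers=4, threads_per_worker=2)

    d.store(TB1, lock=False)  # chunk tasks now execute on the cluster's workers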