I'm using redis-py to process bulk insertions into a Redis store.
I wrote the following very simple method:
import redis

def push_metadata_to_redis(list_of_nested_dictionaries):
    redis_client = redis.Redis(host='localhost', port=6379, db=0)
    redis_pipeline = redis_client.pipeline(transaction=False)
    for dictionary in list_of_nested_dictionaries:
        for k, inner_dict in dictionary.items():
            redis_pipeline.hset(k, mapping=inner_dict)
    result = redis_pipeline.execute(raise_on_error=True)
    print(result)
Basically it queues one hset per key. Each dictionary contains ~10k elements, so redis_pipeline.execute(raise_on_error=True) happens once every ~10k hset calls.
I noticed that after a few minutes, result's values step from arrays of 0s to arrays of 4s, and this worries me. On one hand I expect any error to be raised as an Exception (raise_on_error=True), but on the other hand I'm failing to find any reference to this behaviour in the documentation, and I don't understand what it means.
So my question is: does result being an array of 4s mean that something went wrong in the redis_pipeline.execute(raise_on_error=True) operation?

Thanks in advance.
So when using the HSET command, the return value is the number of fields that were newly added:
# check if key exists
127.0.0.1:6379> EXISTS key1
(integer) 0
# add a hash with 4 k/v pairs
127.0.0.1:6379> HSET key1 a 1 b 1 c 1 d 1
(integer) 4
# Set same fields for an existing hash
127.0.0.1:6379> HSET key1 a 1 b 1 c 1 d 1
(integer) 0
# Add an additional k/v pair
127.0.0.1:6379> HSET key1 a 1 b 1 c 1 d 1 e 1
(integer) 1
127.0.0.1:6379> HSET key1 f 1
(integer) 1
So it's possible that those entries returning 0 already existed in the cache and no new fields were added; the 4s simply mean all four fields of the hash were created for the first time.
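You can reproduce this return-value semantics without a Redis server using a plain dict. This is just an illustrative sketch (hset_sim is a hypothetical helper, not part of redis-py), but it mirrors what HSET reports back through the pipeline:

```python
def hset_sim(store, key, mapping):
    """Simulate HSET: return the number of *newly created* fields.

    `store` stands in for the Redis keyspace (a dict of dicts);
    this is a pure-Python sketch, not redis-py's API.
    """
    h = store.setdefault(key, {})
    newly_added = sum(1 for field in mapping if field not in h)
    h.update(mapping)
    return newly_added

store = {}
# All four fields are new -> 4, like the first redis-cli HSET above
print(hset_sim(store, "key1", {"a": 1, "b": 1, "c": 1, "d": 1}))  # 4
# Same fields again -> 0, nothing newly created
print(hset_sim(store, "key1", {"a": 1, "b": 1, "c": 1, "d": 1}))  # 0
# One extra field -> 1
print(hset_sim(store, "key1", {"a": 1, "e": 1}))                  # 1
```

In other words, an array of 4s from pipeline.execute() means each hset created four new fields, while 0s mean every field it wrote already existed (the values may still have been overwritten); neither indicates an error.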