I wrote a very simple script in Python 2.7 called slowcat.py which lets me cat a file at a given byte rate per second. The problem is that the given rate is doubled on the output stream: rate=1 byte results in 2 bytes per second, and rate=4 bytes results in 8 bytes per second. See the code snippet for the actual program:
#!/usr/bin/env python

import argparse
import time
import sys
import os


def get_configuration():
    """
    Returns a populated configuration
    """
    parser = argparse.ArgumentParser()
    parser.add_argument('file', metavar='FILE', type=argparse.FileType('r'),
                        nargs='?', help="Input file")
    parser.add_argument('--rate', type=int, default=666880,
                        help="Output rate in bytes per second")
    return parser.parse_args()


def main():
    cfg = get_configuration()
    size = os.path.getsize(cfg.file.name)
    bytes_read = 0
    t1 = time.time()
    while True:
        n = min(cfg.rate, size - bytes_read)
        if n <= 0:
            break
        buf = cfg.file.read(n)
        sys.stdout.write(buf)
        sys.stdout.flush()
        bytes_read += n
        t2 = time.time()
        if t2 - t1 < 1.0:
            time.sleep(1.0 - (t2 - t1))
        t1 = t2


if __name__ == "__main__":
    main()
Now my question is: why is the output rate double what I pass on the command line? If you copy-paste the code snippet you can easily try it out on your system, e.g. python slowcat.py --rate 2 slowcat.py
The problem is the t1 = t2 at the end of the while loop. That sets t1 to a time taken before the sleep delay, so on the next iteration the measured elapsed interval includes the sleep and will be more than 1 second, and thus the sleep call will be skipped. Since the sleep is skipped on every other iteration, two chunks of rate bytes are written per second on average, which is exactly the doubling you observe.
To fix it, change that last line to
t1 = time.time()
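For reference, here is a sketch of the corrected loop; only the final assignment differs from the code in the question:

    while True:
        n = min(cfg.rate, size - bytes_read)
        if n <= 0:
            break
        buf = cfg.file.read(n)
        sys.stdout.write(buf)
        sys.stdout.flush()
        bytes_read += n
        t2 = time.time()
        if t2 - t1 < 1.0:
            time.sleep(1.0 - (t2 - t1))
        t1 = time.time()  # re-read the clock after the sleep, not before it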
BTW, it may be more accurate to use the newer time.perf_counter() rather than time.time(); note that perf_counter() was only added in Python 3.3, so it isn't available on Python 2.7.
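On Python 3, the timing part of the loop might then look like this sketch (same logic as above, just a different clock):

    t1 = time.perf_counter()  # monotonic, high-resolution clock
    while True:
        # ... read and write one chunk exactly as before ...
        t2 = time.perf_counter()
        if t2 - t1 < 1.0:
            time.sleep(1.0 - (t2 - t1))
        t1 = time.perf_counter()  # restart the interval after the sleep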