
Tornado PipeIOStream: OSError: [Errno 9] Bad file descriptor


I have a small micro-webservice that stores messages locally, identified by IDs. To ensure that the same file isn't written to concurrently, I've implemented a queue. The following code works only once: a second file upload throws the traceback below, and I really don't know how to handle the file descriptor correctly.

from tornado import web, ioloop, gen
from tornado.queues import Queue
from tornado.iostream import PipeIOStream

class Store:
    def __init__(self):
        self.queue = Queue()

    @gen.coroutine
    def write_queue(self):
        # Drain the queue forever so writes happen one at a time
        while True:
            item = yield self.queue.get()
            print("Message with id %s stored" % item[0])
            fd = open(item[0], 'ab')
            # Wrap the regular file's descriptor in a PipeIOStream
            stream = PipeIOStream(fd.fileno())
            yield stream.write(item[1])
            stream.close_fd()

class MainHandler(web.RequestHandler):

    def initialize(self, store):
        self.store = store

    @gen.coroutine
    def put(self, id):
        yield self.store.queue.put((id, self.request.body))


def start(store):
    return web.Application([
        (r"/(.*)", MainHandler,
         {"store": store})
    ])

if __name__ == '__main__':
    store = Store()
    app = start(store)
    app.listen(8888)
    ioloop.IOLoop.current().add_callback(store.write_queue)
    ioloop.IOLoop.current().start()




ERROR:tornado.application:Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f46657f46a8>, <Future finished exception=OSError(9, 'Bad file descriptor')>)
Traceback (most recent call last):

    stream = PipeIOStream(fd.fileno())
  File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 1643, in __init__
    self._fio = io.FileIO(self.fd, "r+")
OSError: [Errno 9] Bad file descriptor

Solution

    1. The file descriptor returned by open is not a pipe. It's generally legal to use regular files with PipeIOStream, but it's not useful on Linux: such file descriptors are always reported as readable, and reads and writes on them block anyway. So using PipeIOStream like this is no better than simply calling fd.write(item[1]) directly (see the sketch after this list).

    2. You've opened the file in write-only mode, but PipeIOStream wraps its file descriptor in a read/write wrapper (I'm actually somewhat surprised this ever works, since real pipes are one-way). I think that's where this exception comes from. Opening the file in 'ab+' mode would probably make it work, though I haven't tried it.
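
Following point 1, here is a minimal sketch of write_queue that replaces PipeIOStream with an ordinary blocking write. The Store name and queue layout are taken from the question, and this assumes the queue coroutine remains the only writer, so no extra locking is needed:

from tornado import gen
from tornado.queues import Queue

class Store:
    def __init__(self):
        self.queue = Queue()

    @gen.coroutine
    def write_queue(self):
        while True:
            item = yield self.queue.get()
            # Plain blocking write instead of PipeIOStream; the queue already
            # serializes writers, and the with-block closes the file after
            # every message, so no descriptor is leaked between uploads.
            with open(item[0], 'ab') as fd:
                fd.write(item[1])
            print("Message with id %s stored" % item[0])

If you want to keep PipeIOStream anyway, point 2 suggests opening with open(item[0], 'ab+') so the descriptor is read/write, but on a regular file that still buys you nothing over the plain write above.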