Python server for streaming request body content

I am trying to create an intelligent Python proxy server that can stream large request bodies from the client to internal storage (which may be Amazon S3, Swift, FTP, or something similar). Before streaming, the server should query an internal API server that determines the upload parameters for the internal storage. The main restriction is that it must all happen in a single HTTP PUT operation. It should also work asynchronously, because there will be many concurrent file uploads.

What solution would let me read chunks of the upload content and start streaming those chunks to internal storage before the user has finished uploading the whole file? All the Python web applications I know of wait until the entire content has been received before handing control to the WSGI application/Python web server.
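For illustration, the chunk-reading loop itself is framework-independent. Here is a minimal sketch against an in-memory stream standing in for `wsgi.input` (the helper name `read_in_chunks` is my own, not from any framework):

```python
import io

def read_in_chunks(stream, chunk_size=1024):
    """Yield successive chunks from a file-like object until EOF."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Stand-in for environ["wsgi.input"]: 3000 bytes of payload.
body = io.BytesIO(b"x" * 3000)
chunks = list(read_in_chunks(body))
print([len(c) for c in chunks])  # → [1024, 1024, 952]
```

The question is which server actually exposes `wsgi.input` (or an equivalent) incrementally, rather than buffering the whole body first.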

One solution I found is a fork of Tornado, but it is unofficial and the Tornado developers are in no hurry to merge it into the main branch. So perhaps you know of an existing solution to my problem? Tornado? Twisted? gevent?
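As a stdlib-only illustration of the idea (not one of the async frameworks asked about, and not suitable for production concurrency), `http.server` hands the handler the raw `rfile` socket stream, so a `do_PUT` method can consume the body incrementally instead of buffering it whole:

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # chunks collected by the handler, for demonstration

class StreamingPutHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # rfile is the raw request stream: read the body in chunks
        # as it arrives rather than all at once.
        remaining = int(self.headers["Content-Length"])
        while remaining > 0:
            chunk = self.rfile.read(min(1024, remaining))
            if not chunk:
                break
            remaining -= len(chunk)
            received.append(chunk)  # here: forward to internal storage
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StreamingPutHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("PUT", "/upload", body=b"y" * 3000)
resp = conn.getresponse()
print(resp.status, sum(len(c) for c in received))  # → 200 3000
server.shutdown()
```

This single-threaded server only shows that incremental reads are possible at the socket level; handling many simultaneous uploads is exactly what Tornado, Twisted, or gevent would add on top.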


  • It seems I have a solution using the gevent library and monkey patching:

    from gevent.monkey import patch_all
    patch_all()

    from gevent.pywsgi import WSGIServer

    def stream_to_internal_storage(data):
        # Placeholder: forward the chunk to the internal storage backend.
        pass

    def simple_app(environ, start_response):
        bytes_to_read = 1024
        while True:
            readbuffer = environ["wsgi.input"].read(bytes_to_read)
            if not len(readbuffer) > 0:
                break
            stream_to_internal_storage(readbuffer)
        start_response("200 OK", [("Content-type", "text/html")])
        return ["hello world"]

    def run():
        config = {'host': '', 'port': 45000}
        server = WSGIServer((config['host'], config['port']), application=simple_app)
        server.serve_forever()

    if __name__ == '__main__':
        run()

    It works well when I try to upload a huge file:

    curl -i -X PUT --progress-bar --verbose --data-binary @/path/to/huge/file ""
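The `stream_to_internal_storage` stub above is where each chunk would be forwarded. As one hypothetical stand-in, a local file can play the role of the storage backend (the `FileStorage` class and `write_chunk` method are my own names, not part of any storage client API; a real S3, Swift, or FTP target would use its own client library instead):

```python
import os
import tempfile

class FileStorage:
    """Stand-in for an internal storage backend: appends chunks to a file.
    A real backend (S3 multipart upload, Swift, FTP STOR) would stream
    the chunks over its own client API instead."""
    def __init__(self, path):
        self._fh = open(path, "wb")
    def write_chunk(self, data):
        self._fh.write(data)
    def close(self):
        self._fh.close()

path = os.path.join(tempfile.mkdtemp(), "upload.bin")
storage = FileStorage(path)
# Simulate chunks arriving from the WSGI read loop.
for chunk in (b"a" * 1024, b"b" * 1024, b"c" * 952):
    storage.write_chunk(chunk)
storage.close()
print(os.path.getsize(path))  # → 3000
```

Because each chunk is written out as soon as it is read, memory use stays bounded regardless of the upload size, which is the whole point of the streaming design.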