
Sharing an object between Gunicorn workers, or persisting an object within a worker


I'm writing a WSGI app using an Nginx / Gunicorn / Bottle stack that accepts a GET request, returns a simple response, and then writes a message to RabbitMQ. If I were running the app through straight Bottle, I'd be reusing the RabbitMQ connection every time the app receives a GET. However, in Gunicorn, it looks like the workers are destroying and recreating the MQ connection every time. I was wondering if there's a good way to reuse that connection (a sketch of what I'm after follows the code below).

More detailed info:

# This is my bottle app
from bottle import route, run
import bottle
from mqconnector import MQConnector

mqc = MQConnector(ip, exchange)

@route('/')
def index():
  try:
    mqc
  except NameError:
    mqc = MQConnector(ip, exchange)

  mqc.publish('whatever message')
  return 'ok'

if __name__ == '__main__':
  run(host='blah', port=808)
app = bottle.default_app()
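
What I'm effectively trying to do with that try/except is cache the connection at module level and rebuild it only if it's missing. A minimal, stand-alone sketch of that intent (using the same placeholder names as above) would be:

mqc = None

def get_mqc():
  # Build the connection once per worker process, then reuse it on
  # every later request that this worker handles.
  global mqc
  if mqc is None:
    mqc = MQConnector(ip, exchange)
  return mqc

# index() would then call get_mqc().publish('whatever message')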

Solution

  • Okay, this took me a little while to sort out. What was happening was that every time a new request came through, Gunicorn ran my index() method and, in doing so, created a new instance of MQConnector.

    The fix was to refactor MQConnector so that, rather than being a class, it was just a module of functions and module-level variables. That way, each worker refers to the same MQConnector objects on every request instead of creating a new instance each time. Finally, I passed MQConnector's publish() function along to the code that needs it (a fuller, hypothetical sketch of the module follows the snippets below).

    #Bottle app
    from bottle import route
    from blah import blahblah
    import MQConnector
    
    @route('/')
    def index():
      blahblah(foo, bar, baz, MQConnector.publish)
    

    and

    #MQConnector
    import pika
    mq_ip = "blah"
    exchange_name = "blahblah"

    connection = pika.BlockingConnection(....
    ...
    
    def publish(message, r_key):
      ...
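
    Since the snippet above elides the connection setup, here is a fuller, hypothetical sketch of the same module-level idea; the host, exchange name, and exchange type are placeholder values rather than my real settings:

    #MQConnector -- a fuller, hypothetical sketch of the module above
    import pika

    mq_ip = "localhost"            # placeholder host
    exchange_name = "my_exchange"  # placeholder exchange

    # Module-level objects: each Gunicorn worker imports this module once,
    # so the connection and channel are created once per worker and then
    # reused by every request that worker handles.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=mq_ip))
    channel = connection.channel()
    channel.exchange_declare(exchange=exchange_name, exchange_type='topic')

    def publish(message, r_key):
      # Publish on the worker's existing channel instead of reconnecting.
      channel.basic_publish(exchange=exchange_name,
                            routing_key=r_key,
                            body=message)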
    

    Results: A call that used to take 800ms now takes 4ms. I used to max out at 80 calls/second across 90 Gunicorn workers, and now I max out around 700 calls/second across 5 Gunicorn workers.