This is my code:
import time
from flask import Flask

app = Flask(__name__)

class Time:
    @app.route('/time', methods=['GET'])
    def get():
        start = time.time()
        for i in range(30000):
            for j in range(30000):
                pass
        return str(time.time() - start)
I have tried many tools to solve this, such as Tornado, Gunicorn, and Python's multi-threading and multiprocessing, but all of them failed. Once I open localhost:5000/time, opening two pages simultaneously is much slower than opening one page two times in a row.
In my opinion, if multiprocessing worked, two simultaneous requests should take half the time of two sequential ones.
Edit:
This is the code I used to try to build a multi-process app with Tornado:
from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

if __name__ == "__main__":
    http_server = HTTPServer(WSGIContainer(app))
    http_server.listen(5000)
    IOLoop.instance().start()
With this, opening the page two times in a row costs 23 s, but opening two pages at the same time costs 26 s. I hope the simultaneous case can cost about 11 s, equal to opening a single page.
You're mixing a few things up.
Gunicorn serves your dynamic API more efficiently by running it across multiple worker processes, and you should typically use it in a pair with nginx to actually gain efficiency. I.e. Gunicorn handles the dynamic API while nginx serves the static content and forwards the rest.
You can read more details here: https://serverfault.com/questions/331256/why-do-i-need-nginx-and-something-like-gunicorn
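As a sketch of that split (server name and paths are hypothetical, not from the question), an nginx site config that serves static files itself and proxies everything else to Gunicorn might look like:

```nginx
server {
    listen 80;
    server_name example.com;  # hypothetical domain

    location /static/ {
        # nginx serves static files directly, bypassing Gunicorn
        alias /var/www/myniceappname/static/;  # hypothetical path
    }

    location / {
        # forward dynamic requests to the Gunicorn bind address below
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```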
Here is a working Gunicorn setup:
echo "--------------- Kill workers ---------------"
# Find any running Gunicorn workers for this app and stop them
ps aux | grep gunicorn | grep myniceappname | awk '{ print $2 }' | xargs kill -9
echo "--------------- Start workers ---------------"
# --workers 4 starts four processes, so up to four requests run in parallel
gunicorn run:app --name myniceappname --bind 0.0.0.0:5000 --workers 4 --reload
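The reason multiple worker processes help here is that your endpoint is CPU-bound: CPython's GIL prevents threads from executing bytecode in parallel, but separate processes each get their own interpreter and can use separate cores. A stdlib-only sketch of the difference (loop size shrunk from your 30000x30000 so it finishes quickly):

```python
# Sketch: CPU-bound work runs in parallel across processes, not threads.
import time
from multiprocessing import Pool

def busy_loop(n):
    """Burn CPU like the /time endpoint does; return elapsed seconds."""
    start = time.time()
    for i in range(n):
        for j in range(n):
            pass
    return time.time() - start

if __name__ == "__main__":
    N = 1000
    # Two sequential runs: total time is roughly 2x one run.
    seq_total = busy_loop(N) + busy_loop(N)
    # Two runs in separate processes: wall time is close to one run
    # on a machine with at least two free cores.
    start = time.time()
    with Pool(2) as pool:
        pool.map(busy_loop, [N, N])
    par_wall = time.time() - start
    print(f"sequential total: {seq_total:.2f}s, parallel wall: {par_wall:.2f}s")
```

Each Gunicorn worker is exactly such an independent process, which is why two simultaneous requests to a 4-worker setup should finish in about the time of one.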