I made a Discord bot in Python and have now added a "Chatbot" feature to it using TensorFlow and NLTK. When I run the bot locally, it works absolutely fine without any issues, but when I move it to my Namecheap hosting package (where I host my portfolio), it starts throwing this error:
OpenBLAS blas_thread_init: pthread_create failed for thread 29 of 64: Resource temporarily unavailable
After that, nltk and tensorflow fail to import and the bot crashes.
I googled it and found a solution that suggests setting os.environ['OPENBLAS_NUM_THREADS'] = '1' before any imports.
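For reference, the change looks roughly like this (the imports below are only examples; the only requirement is that the environment variable is set before anything that loads OpenBLAS, such as numpy or tensorflow, gets imported):

import os
os.environ['OPENBLAS_NUM_THREADS'] = '1'  # must run before numpy/tensorflow are imported

# Anything that directly or indirectly loads OpenBLAS comes after the line above.
import numpy as np
import nltk
import tensorflow as tf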
This solved the previous error, but now it gives another error:
Check failed: ret == 0 (11 vs. 0)Thread creation via pthread_create() failed.
The complete output of running python main.py now is:
2021-06-10 11:18:19.606471: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-06-10 11:18:19.606497: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-06-10 11:18:21.090650: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-06-10 11:18:21.090684: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-06-10 11:18:21.090716: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (server270.web-hosting.com): /proc/driver/nvidia/version does not exist
2021-06-10 11:18:21.091042: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-10 11:18:21.092409: F tensorflow/core/platform/default/env.cc:73] Check failed: ret == 0 (11 vs. 0)Thread creation via pthread_create() failed.
To keep this question from getting too long, the source files are hosted on GitHub here: https://github.com/Nalin-2005/The2020CoderBot The README.md explains which files contain which part of the bot.
The bot is hosted on Namecheap shared hosting, and the technical specs of the server are:
cat /proc/cpuinfo | grep 'model name' | uniq
model name : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
As far as I know, both issues are caused by the limited RAM and CPU/thread allowance on shared hosting, but now the Python script itself gets blocked before it can do anything.
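For what it's worth, the process/thread limit I suspect here can be checked from Python itself; this is just a diagnostic snippet, not part of the bot:

import resource

# RLIMIT_NPROC is the maximum number of processes/threads this user may create.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("Process/thread limit (soft, hard):", soft, hard)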
So, what causes this (if I am not correct), and how can I fix it?
After some time brainstorming and googling, I found TensorFlow Lite. It consumes fewer resources while offering the same performance* on my server, and I could easily integrate it with the previous code to produce a more resource-efficient model. For anyone who wants to know how to convert any Keras model to TensorFlow Lite, here are the instructions.
Replace
model.save("/path/to/model.h5")
with:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("/path/to/model.tflite", "wb") as f:
    f.write(tflite_model)
And to load the converted model and get predictions from it (in place of model.predict(input_data)), use:
import numpy as np
import tensorflow as tf

model = tf.lite.Interpreter(model_path="/path/to/model.tflite")
model.allocate_tensors()
input_details = model.get_input_details()
output_details = model.get_output_details()
# prepare input data (it must match the dtype and shape given in input_details)
model.set_tensor(input_details[0]['index'], input_data)
model.invoke()
output_data = model.get_tensor(output_details[0]['index'])
results = np.squeeze(output_data)
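If you want the rest of your code to keep calling something shaped like model.predict(), a small wrapper like the one below works. It is only a sketch: it assumes a model with a single float32 input tensor and a single output tensor, and the class and variable names are made up, not part of the original bot.

import numpy as np
import tensorflow as tf

class TFLitePredictor:
    # Hypothetical drop-in replacement for model.predict() for a model with a
    # single input tensor and a single output tensor.
    def __init__(self, model_path):
        self.interpreter = tf.lite.Interpreter(model_path=model_path)
        self.interpreter.allocate_tensors()
        self.input_details = self.interpreter.get_input_details()
        self.output_details = self.interpreter.get_output_details()

    def predict(self, input_data):
        # TFLite is strict about dtype; converted Keras models usually expect float32.
        input_data = np.asarray(input_data, dtype=np.float32)
        # Add a batch dimension if a single sample was passed.
        if input_data.ndim == len(self.input_details[0]['shape']) - 1:
            input_data = np.expand_dims(input_data, axis=0)
        self.interpreter.set_tensor(self.input_details[0]['index'], input_data)
        self.interpreter.invoke()
        return self.interpreter.get_tensor(self.output_details[0]['index'])

# Illustrative usage (the variable names are made up):
# predictor = TFLitePredictor("/path/to/model.tflite")
# results = np.squeeze(predictor.predict(input_vector))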