I wrote a simple bot using Rasa. To handle messages I create a Flask app and load the agent into it. I take the user's message and ID from the request, pass them to the agent's handle_text method, and return the response. The problem is that after one of the stories defined in my stories.md has been completed, the agent stops answering.
Here is my Flask app:
from flask import Flask, request
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter

app = Flask(__name__)
# Rasa interpreter, loaded at startup
interpreter = None
# Rasa agent, loaded at startup
agent = None

@app.route('/')
def index():
    # Receive message from request
    message = request.args.get('msg')
    # Receive user id from request
    user_id = request.args.get('uid')
    # Validation
    if not message:
        return 'No message specified in field \'msg\''
    if not user_id:
        return 'No user id specified in field \'uid\''
    # Pass the received message to the rasa agent
    answers = agent.handle_text(message, sender_id=user_id)
    # Build the response text
    if len(answers) > 0:
        text = "User: {} | {}".format(user_id, answers[0].get('text'))
    else:
        text = "User: {} | Nothing to answer".format(user_id)
    return text

if __name__ == '__main__':
    # Load rasa interpreter
    interpreter = RasaNLUInterpreter(NLU_PATH)
    # Load rasa agent
    agent = Agent.load(CORE_PATH, interpreter=interpreter)
    app.run()
My stories.md is
## Simple flow
* greet
- utter_greet
* bye
- utter_bye
## Order pizza
* greet
- utter_greet
* order_pizza_type
- utter_finish_order_pizza
* bye
- utter_bye
## Story
* order_pizza_type
- utter_finish_order_pizza
## Generated Story -1054914010798310995
* greet
- utter_greet
* order_pizza_type{"Country": "mexican"}
- utter_finish_order_pizza
* bye
- utter_bye
## New Story
* greet
- utter_greet
* order_pizza_wish
- utter_finish_order_pizza
* bye
- utter_bye
and my config.yml:
language: "en"

pipeline:
  - name: "nlp_spacy"
  - name: "tokenizer_spacy"
  - name: "ner_crf"
  - name: "tokenizer_whitespace"
  - name: "intent_featurizer_count_vectors"
  - name: "intent_classifier_tensorflow_embedding"
    intent_tokenization_flag: true
    intent_split_symbol: "+"

policies:
  - name: "KerasPolicy"
    featurizer:
      - name: MaxHistoryTrackerFeaturizer
        max_history: 5
        state_featurizer:
          - name: BinarySingleStateFeaturizer
  - name: "MemoizationPolicy"
    max_history: 5
  - name: "FallbackPolicy"
    nlu_threshold: 0.4
    core_threshold: 0.3
My expected result:
$ curl "http://localhost:5000?msg=hello&uid=1"
$ curl "http://localhost:5000?msg=I want to order pizza&uid=1"
$ curl "http://localhost:5000?msg=Bye&uid=1"
$ curl "http://localhost:5000?msg=hello&uid=1"
Response
> User: 1 | Hey! How are you?
> User: 1 | Ok I will deliver pizza for you
> User: 1 | Bye
> User: 1 | Hey! How are you?
But my actual result is:
$ curl "http://localhost:5000?msg=hello&uid=1"
$ curl "http://localhost:5000?msg=I want to order pizza&uid=1"
$ curl "http://localhost:5000?msg=Bye&uid=1"
$ curl "http://localhost:5000?msg=hello&uid=1"
Response
> User: 1 | Hey! How are you?
> User: 1 | Ok I will deliver pizza for you
> User: 1 | Bye
> User: 1 | Nothing to answer
As you can see, there is no response to the second "hello" after one storyline has been completed.
As suggested in the comments, I'd recommend using interactive learning to debug your bot and to create new training stories. Currently your training data is very sparse.
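With the pre-1.0 rasa_core API used in the question, an interactive learning session can be started from the command line. This is only a sketch; the exact subcommand and flags depend on your rasa_core version, and the paths are placeholders for your project layout:

```shell
# Start an interactive learning session against your domain and stories
# (exact flags vary between rasa_core versions; check python -m rasa_core.train --help)
python -m rasa_core.train interactive \
  --domain domain.yml \
  --stories stories.md \
  --out models/dialogue
```

Stories you confirm or correct during the session can then be exported and appended to stories.md, which is the quickest way to grow the training data.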
Did you use data augmentation during training? Unless you set the parameter to something else, the augmentation factor defaults to 20.
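For reference, the pre-1.0 rasa_core training script exposes this as a command-line flag. This is a sketch with placeholder paths; check the help output of your installed version for the exact flag names:

```shell
# Train the dialogue model with story augmentation switched off
# (the flag defaults to 20 if not given)
python -m rasa_core.train \
  --domain domain.yml \
  --stories stories.md \
  --out models/dialogue \
  --augmentation 0
```

Setting the factor to 0 disables augmentation entirely, which makes the policy's behaviour much easier to reason about while debugging.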
If you are using augmentation, I'd suggest also adding another short story to handle a standalone greet:
## Simple flow
* greet
- utter_greet
One more thing: it is recommended to use general intents and to distinguish them by the recognized entities. Hence, instead of order_pizza_type and order_pizza_wish it would be better to have a single intent order_pizza (or even just order), plus slots for food_type, product_to_order (e.g. pizza), and so on. If you have very similar intents such as order_pizza_type and order_pizza_wish, NLU will have a tough time distinguishing them.
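In the markdown NLU training data format, such a general intent could look like the following; the intent name order and the entity names food_type and product_to_order are only illustrative:

```md
## intent:order
- I want to order a [mexican](food_type) [pizza](product_to_order)
- can I get a [margherita](food_type) [pizza](product_to_order)
- please order a [pepperoni](food_type) [pizza](product_to_order) for me
```

The ner_crf component in your pipeline can then learn to extract these entities, and your dialogue policy can branch on the extracted slot values instead of on near-duplicate intents.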