I'm writing a Slackbot that will monitor a Slack channel and only respond to a conversation when it hears about one of the two things it's interested in. My dialog therefore looks like this:
|
|-[#intent1]-...
|
|-[#intent2]-...
|
|-[anything_else]
Most of the time, I expect the anything_else block to be triggered, but now and again, messages matching #intent1 or #intent2 will be matched.
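The routing described above can be sketched as a small decision function. This is a minimal illustration, not Watson's actual API: it assumes the service response has already been parsed into a list of (intent, confidence) pairs, and the 0.5 confidence threshold and function name are my own choices.

```python
# Hypothetical routing logic for the bot. `intents` is assumed to be a
# list of (name, confidence) tuples sorted by descending confidence;
# an empty list stands for input the service classified as irrelevant.

CONFIDENCE_THRESHOLD = 0.5  # illustrative cut-off, not a documented default

KNOWN_INTENTS = {"intent1", "intent2"}

def route_message(intents):
    """Return the name of the dialog branch a message should take."""
    if not intents:
        return "anything_else"
    top_intent, confidence = intents[0]
    if top_intent in KNOWN_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return top_intent
    # Low-confidence or unknown intents fall through to the catch-all.
    return "anything_else"

print(route_message([("intent1", 0.92)]))  # → intent1
print(route_message([("intent2", 0.31)]))  # low confidence → anything_else
print(route_message([]))                   # irrelevant → anything_else
```

The key point is that everything the model is unsure about should land in the anything_else branch rather than being forced into one of the two real intents.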
I'm in the process of training the workspace, and I find that the 'Ask Watson' feed in the workspace editor always matches one of the two intents. I'd like to train it away from identifying random conversation with those intents, and I intended to use the drop-down box to indicate that no intent should be identified. However, I find it's not possible to select 'no intent' from that box.
Is it recommended to have an intent which is for 'random rubbish', so I can train the model, or would that produce bad results from the training?
Check out https://www.ibm.com/watson/developercloud/doc/conversation/irrelevant_utterance.html. You can now categorize inputs as irrelevant to your purpose. If strings you don't want to match are triggering your intents, add more positive examples to those intents, and also enter strings that are "just over the line" (ones you don't want to match) and mark them irrelevant.
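Inputs marked irrelevant end up as counterexamples in the workspace training data. As a rough sketch of what that looks like, here is a workspace payload in the shape of the Conversation v1 workspace format; the workspace name, intent examples, and counterexample strings are all illustrative, not taken from the question.

```python
# Illustrative workspace training data. Inputs marked irrelevant in the
# tooling are stored separately from intents, as counterexamples.
workspace = {
    "name": "slackbot-workspace",  # hypothetical name
    "intents": [
        {
            "intent": "intent1",
            "examples": [
                {"text": "please restart the build server"},  # made-up example
                {"text": "can someone kick off a rebuild"},
            ],
        },
        {
            "intent": "intent2",
            "examples": [
                {"text": "deploy the latest release"},
                {"text": "push version 2.1 to production"},
            ],
        },
    ],
    # "Just over the line" chatter, marked irrelevant rather than
    # forced into a junk intent:
    "counterexamples": [
        {"text": "what's everyone doing for lunch"},
        {"text": "did anyone watch the game last night"},
    ],
}

# An irrelevant-marked string never appears under any intent's examples.
all_examples = {
    ex["text"] for intent in workspace["intents"] for ex in intent["examples"]
}
print(workspace["counterexamples"][0]["text"] in all_examples)  # → False
```

Training this way generally gives better results than a catch-all 'random rubbish' intent, because the model learns a boundary around your real intents instead of trying to characterize all possible noise.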