The Actions SDK does not recognize any other intent from the actions.json file. I've read that this is not a bug in this post: unable to read intents
What I don't understand is why we have the option to define actions if they are not recognized by the SDK.
Is there any other way to add more intents without using Dialogflow?
That is correct; it is not a bug. The Intents listed in the actions.json file are primarily used to match the initial Intent; they're plural because they help identify which initial Intent to use when you have more than one defined. They can also help shape the conversation and suggest what patterns the speech-to-text parser should look for, but they don't require the parser to follow them. I would venture this is intentional, to allow flexibility in the various Natural Language Parsers.
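For illustration, here is a rough sketch of an actions.json that defines two initial Intents (the names, query patterns, and URL are all made up; check the Actions SDK docs for the exact schema). The custom Intent gives users a deep-link invocation phrase, but once the conversation is underway the parser is free to send your fulfillment raw text instead:

```json
{
  "actions": [
    {
      "description": "Default welcome intent",
      "name": "MAIN",
      "fulfillment": { "conversationName": "example" },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": { "queryPatterns": ["talk to Example App"] }
      }
    },
    {
      "description": "Hypothetical deep-link intent for ordering",
      "name": "ORDER",
      "fulfillment": { "conversationName": "example" },
      "intent": {
        "name": "com.example.intent.ORDER",
        "trigger": { "queryPatterns": ["ask Example App to order flowers"] }
      }
    }
  ],
  "conversations": {
    "example": {
      "name": "example",
      "url": "https://example.com/fulfillment"
    }
  },
  "locale": "en"
}
```

Here both Intents only help decide how the conversation starts; they don't constrain what the parser sends you on later turns.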
That flexibility is probably why they're ultimately not used. Unlike Alexa, which requires a wide range of exact text matches for its Intent definitions, Google probably started down that route and realized it would be better to hand the parsing off to other NLPs, either your own or commercial ones, which can handle the flexibility of how humans actually speak. (And then they bought one, Dialogflow, to offer as a suggested tool.)
So the Actions SDK has primarily become the tool to use if you intend to hand the language parsing off to another system. Otherwise, there isn't much advantage to using it over any other tool.
You're not obligated to use Dialogflow. You can use any NLP system that accepts text input for the language you need. Google also provides direct integration with Converse.AI, and I suspect any other NLP out there will provide directions on how to integrate it with Actions.
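As a rough sketch of what that handoff can look like, assuming the actions-on-google Node.js client library (the /fulfillment path and the parseWithMyNlp function are hypothetical stand-ins for your webhook URL and your chosen NLP): after the MAIN invocation, each user utterance reaches your webhook as raw text under actions.intent.TEXT, and you can forward it wherever you like.

```typescript
import { actionssdk } from 'actions-on-google';
import * as express from 'express';
import * as bodyParser from 'body-parser';

// Hypothetical stand-in for your own or a commercial NLP service:
// anything that accepts plain text and returns a reply will do.
async function parseWithMyNlp(text: string): Promise<string> {
  return `You said: ${text}`;
}

const app = actionssdk();

// Fires on invocation, e.g. "talk to Example App".
app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Welcome! What would you like to do?');
});

// Every later utterance arrives here as raw text; no intent matching
// against actions.json happens mid-conversation.
app.intent('actions.intent.TEXT', async (conv, input) => {
  const reply = await parseWithMyNlp(input);
  conv.ask(reply);
});

// The hypothetical /fulfillment path should match the url in actions.json.
express()
  .use(bodyParser.json())
  .post('/fulfillment', app)
  .listen(3000);
```

The key point is that the actions.intent.TEXT handler receives the unparsed user text, so swapping parseWithMyNlp for Dialogflow, Converse.AI, or your own parser is entirely your choice.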