How do you run FastText on a corpus and use those embeddings in an MXNet Symbol embedding layer?
To do that, you first need to load the matrix that contains the FastText embeddings and then pass it as an initializer to the weight of the embedding layer:
# fasttext_matrix is the loaded embedding matrix as an mx.nd.NDArray
the_emb_3 = mx.sym.Variable('vocab_embed_weight', init=mx.init.Constant(fasttext_matrix))
embed_layer_3 = mx.sym.Embedding(data=input_x_3, weight=the_emb_3, input_dim=vocab_size, output_dim=embedding_dim, name='vocab_embed')
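The loading step can be sketched like this, assuming the embeddings come from a FastText `.vec` text file (a header line followed by one word and its vector per line). The function name, the toy vocabulary, and the random initialization for out-of-vocabulary words are illustrative choices, not part of any particular API:

```python
import numpy as np

def build_embedding_matrix(vec_lines, vocab, embedding_dim):
    """Build a |vocab| x embedding_dim matrix from FastText .vec lines.

    `vec_lines` is an iterable of lines like "word 0.1 0.2 ...";
    words missing from the file keep a random initialization.
    """
    vectors = {}
    for line in vec_lines:
        parts = line.rstrip().split(' ')
        if len(parts) != embedding_dim + 1:
            continue  # skip the "<count> <dim>" header and malformed rows
        vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

    matrix = np.random.uniform(-0.25, 0.25,
                               (len(vocab), embedding_dim)).astype(np.float32)
    for word, idx in vocab.items():
        if word in vectors:
            matrix[idx] = vectors[word]
    return matrix

# Toy example: two words found in the file, one out-of-vocabulary word.
lines = ["2 3", "cat 0.1 0.2 0.3", "dog 0.4 0.5 0.6"]
vocab = {"cat": 0, "dog": 1, "fish": 2}
emb = build_embedding_matrix(lines, vocab, 3)
```

The resulting NumPy matrix can then be wrapped with `mx.nd.array(emb)` and fed to the embedding weight via `mx.init.Constant`, as in the snippet above.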
I took this example from here, where they use GloVe embeddings, but the idea is the same.
I would highly recommend using the Gluon API instead of the Symbol API. That way it will be much easier for you to use all the goodness of the GluonNLP package, which already ships pretrained FastText embeddings. See this tutorial to learn how to use FastText in GluonNLP.