I'm having trouble fine-tuning the decomposable-attention-elmo model. I was able to download it:
wget https://s3-us-west-2.amazonaws.com/allennlp/models/decomposable-attention-elmo-2018.02.19.tar.gz
I'm now trying to load the model and fine-tune it on my data using the allennlp train command-line command.
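The invocation looks like this (the config file name, output directory, and package name are placeholders for my own setup):

allennlp train my_config.jsonnet -s ./fine_tune_output --include-package my_package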
I also created a custom dataset reader, similar to the SNLIDatasetReader, and it seems to be working well; a sketch of it follows.
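For reference, it follows roughly this pattern (the class name, registered name, and JSON keys below are placeholders for my data format; the premise/hypothesis/label field names are the ones the decomposable attention model's forward() expects):

import json
from typing import Dict, Optional

from allennlp.data.dataset_readers import DatasetReader
from allennlp.data.fields import Field, LabelField, TextField
from allennlp.data.instance import Instance
from allennlp.data.token_indexers import TokenIndexer
from allennlp.data.tokenizers import Tokenizer


@DatasetReader.register("custom_reader")
class CustomReader(DatasetReader):
    """Reads one JSON object per line, SNLI-style."""

    def __init__(self,
                 tokenizer: Tokenizer,
                 token_indexers: Dict[str, TokenIndexer]) -> None:
        super().__init__()
        # Both of these come from the "dataset_reader" block of the config.
        self._tokenizer = tokenizer
        self._token_indexers = token_indexers

    def _read(self, file_path: str):
        with open(file_path) as data_file:
            for line in data_file:
                example = json.loads(line)
                yield self.text_to_instance(example["premise"],
                                            example["hypothesis"],
                                            example.get("label"))

    def text_to_instance(self,
                         premise: str,
                         hypothesis: str,
                         label: Optional[str] = None) -> Instance:
        # Field names must match the model's forward() arguments.
        fields: Dict[str, Field] = {
            "premise": TextField(self._tokenizer.tokenize(premise),
                                 self._token_indexers),
            "hypothesis": TextField(self._tokenizer.tokenize(hypothesis),
                                    self._token_indexers),
        }
        if label is not None:
            fields["label"] = LabelField(label)
        return Instance(fields)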
I created a .jsonnet file, similar to what is here, but I'm having trouble getting it to work.
When I use this version:
// Configuration for a textual entailment model based on:
// Parikh, Ankur P. et al. "A Decomposable Attention Model for Natural Language Inference." EMNLP (2016).
{
  "dataset_reader": {
    "type": "custom_reader",
    "token_indexers": {
      "elmo": {
        "type": "elmo_characters"
      }
    },
    "tokenizer": {
      "end_tokens": ["@@NULL@@"]
    }
  },
  "train_data_path": "examples_train_",
  "validation_data_path": "examples_val_",
  "model": {
    "type": "from_archive",
    "archive_file": "decomposable-attention-elmo-2018.02.19.tar.gz",
    "text_field_embedder": {
      "token_embedders": {
        "elmo": {
          "type": "elmo_token_embedder",
          "do_layer_norm": false,
          "dropout": 0.2
        }
      }
    }
  },
  "data_loader": {
    "batch_sampler": {
      "type": "bucket",
      "batch_size": 64
    }
  },
  "trainer": {
    "num_epochs": 140,
    "patience": 20,
    "grad_clipping": 5.0,
    "validation_metric": "+accuracy",
    "optimizer": {
      "type": "adagrad"
    }
  }
}
I get an error:
File "lib/python3.6/site-packages/allennlp/common/params.py", line 423, in assert_empty
"Extra parameters passed to {}: {}".format(class_name, self.params)
allennlp.common.checks.ConfigurationError: Extra parameters passed to Model: {'text_field_embedder': {'token_embedders': {'elmo': {'do_layer_norm': False, 'dropout': 0.2, 'type': 'elmo_token_embedder'}}}}
Then, when I take that text_field_embedder portion out and use this version:
// Configuration for a textual entailment model based on:
// Parikh, Ankur P. et al. "A Decomposable Attention Model for Natural Language Inference." EMNLP (2016).
{
  "dataset_reader": {
    "type": "fake_news",
    "token_indexers": {
      "elmo": {
        "type": "elmo_characters"
      }
    },
    "tokenizer": {
      "end_tokens": ["@@NULL@@"]
    }
  },
  "train_data_path": "examples_train_",
  "validation_data_path": "examples_val_",
  "model": {
    "type": "from_archive",
    "archive_file": "decomposable-attention-elmo-2018.02.19.tar.gz"
  },
  "data_loader": {
    "batch_sampler": {
      "type": "bucket",
      "batch_size": 64
    }
  },
  "trainer": {
    "num_epochs": 140,
    "patience": 20,
    "grad_clipping": 5.0,
    "validation_metric": "+accuracy",
    "optimizer": {
      "type": "adagrad"
    }
  }
}
I get an error:
raise ConfigurationError(msg)
allennlp.common.checks.ConfigurationError: key "token_embedders" is required at location "model.text_field_embedder."
The two errors seem contradictory: the first config is rejected for passing text_field_embedder as an extra parameter, while the second is rejected for not supplying that same section. I'm not sure how to proceed with the fine-tuning.
We found out on GitHub that the problem was the old version of the model archive that @hockeybro was loading. The latest version, at the time of writing, is at https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz.
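With the newer archive, the second (minimal) config above should then work; its model section just needs to point at the new file, e.g. (AllenNLP can also resolve the URL directly through cached_path):

"model": {
  "type": "from_archive",
  "archive_file": "https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz"
}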