deep-learning, sentiment-analysis, bert-language-model, reproducible-research, allennlp

Span-ASTE with AllenNLP - testing against new unseen and unlabeled data


I am trying to use the Colab notebook from this GitHub page to extract [term, opinion, value] triplets from sentences in my custom dataset.

Here is an overview of the system architecture: Span-Aste system

While I can use the sample offered in the Colab and also train the model with my own data, I don't know how I should re-use the trained model on an unlabeled sample.

If I run the Colab as-is, changing only the test and dev data to unlabeled data, I get this error:

    DEVICE=0
    {
      "names": "sample",
      "seeds": [0],
      "sep": ",",
      "name_out": "results",
      "kwargs": {
        "trainer__cuda_device": 0,
        "trainer__num_epochs": 10,
        "trainer__checkpointer__num_serialized_models_to_keep": 1,
        "model__span_extractor_type": "endpoint",
        "model__modules__relation__use_single_pool": false,
        "model__relation_head_type": "proper",
        "model__use_span_width_embeds": true,
        "model__modules__relation__use_distance_embeds": true,
        "model__modules__relation__use_pair_feature_multiply": false,
        "model__modules__relation__use_pair_feature_maxpool": false,
        "model__modules__relation__use_pair_feature_cls": false,
        "model__modules__relation__use_span_pair_aux_task": false,
        "model__modules__relation__use_span_loss_for_pruners": false,
        "model__loss_weights__ner": 1.0,
        "model__modules__relation__spans_per_word": 0.5,
        "model__modules__relation__neg_class_weight": -1
      },
      "root": "aste/data/triplet_data"
    }
    {
      "root": "/content/Span-ASTE/aste/data/triplet_data/sample",
      "train_kwargs": {
        "seed": 0,
        "trainer__cuda_device": 0,
        "trainer__num_epochs": 10,
        "trainer__checkpointer__num_serialized_models_to_keep": 1,
        "model__span_extractor_type": "endpoint",
        "model__modules__relation__use_single_pool": false,
        "model__relation_head_type": "proper",
        "model__use_span_width_embeds": true,
        "model__modules__relation__use_distance_embeds": true,
        "model__modules__relation__use_pair_feature_multiply": false,
        "model__modules__relation__use_pair_feature_maxpool": false,
        "model__modules__relation__use_pair_feature_cls": false,
        "model__modules__relation__use_span_pair_aux_task": false,
        "model__modules__relation__use_span_loss_for_pruners": false,
        "model__loss_weights__ner": 1.0,
        "model__modules__relation__spans_per_word": 0.5,
        "model__modules__relation__neg_class_weight": -1
      },
      "path_config": "/content/Span-ASTE/training_config/aste.jsonnet",
      "repo_span_model": "/content/Span-ASTE",
      "output_dir": "model_outputs/aste_sample_c7b00b66bf7ec669d23b80879fda043d",
      "model_path": "models/aste_sample_c7b00b66bf7ec669d23b80879fda043d/model.tar.gz",
      "data_name": "sample",
      "task_name": "aste"
    }
    # of original triplets:  11
    # of triplets for current setup:  11
    # of original triplets:  7
    # of triplets for current setup:  7
    Traceback (most recent call last):
      File "/usr/lib/python3.7/pdb.py", line 1699, in main
        pdb._runscript(mainpyfile)
      File "/usr/lib/python3.7/pdb.py", line 1568, in _runscript
        self.run(statement)
      File "/usr/lib/python3.7/bdb.py", line 578, in run
        exec(cmd, globals, locals)
      File "<string>", line 1, in <module>
      File "/content/Span-ASTE/aste/main.py", line 1, in <module>
        import json
      File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 138, in Fire
        component_trace = _Fire(component, args, parsed_flag_args, context, name)
      File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 468, in _Fire
        target=component.__name__)
      File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 672, in _CallAndUpdateTrace
        component = fn(*varargs, **kwargs)
      File "/content/Span-ASTE/aste/main.py", line 278, in main
        scores = main_single(p, overwrite=True, seed=seeds[i], **kwargs)
      File "/content/Span-ASTE/aste/main.py", line 254, in main_single
        trainer.train(overwrite=overwrite)
      File "/content/Span-ASTE/aste/main.py", line 185, in train
        self.setup_data()
      File "/content/Span-ASTE/aste/main.py", line 177, in setup_data
        data.load()
      File "aste/data_utils.py", line 214, in load
        opinion_offset=self.opinion_offset,
      File "aste/evaluation.py", line 165, in read_inst
        o_output = line[2].split()  # opinion
    IndexError: list index out of range
    Uncaught exception. Entering post mortem debugging
    Running 'cont' or 'step' will restart the program
    > /content/Span-ASTE/aste/evaluation.py(165)read_inst()
    -> o_output = line[2].split()  # opinion
    (Pdb)

From my understanding, it seems that the loader is looking for the labels in order to start the evaluation. The problem is that I don't have those labels, although I have provided a training set with similar data and the associated labels.
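
Here is a rough sketch of what I think is happening in read_inst; only the line[2] access comes from the traceback, and the '####' separator is an assumption on my part:

    # My guess at why read_inst breaks on an unlabeled line. Only the
    # line[2] access is taken from the traceback; the '####' separator
    # is an assumption on my part.
    raw_unlabeled = "The screen is great ."

    line = raw_unlabeled.split("####")
    print(line)   # ['The screen is great .'] -> a single field, no label fields

    # evaluation.py then does roughly:
    # o_output = line[2].split()   # IndexError: list index out of range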

I am new to deep learning and to AllenNLP, so I am probably missing some background. I have been trying to solve this for the past two weeks but I am still stuck, so here I am.


Solution

  • KeyPi, this is a supervised learning model, so it needs labelled data for your text corpus. Each line has the form: the sentence (e.g. "I charge it at night and skip taking the cord with me because of the good battery life ."), then '#### #### ####' as a separator, then the list of labels. Each label puts the aspect/target word indices in the first list and the opinion token indices in the second, followed by 'POS' for positive or 'NEG' for negative. For example, in [([16, 17], [15], 'POS')], indices 16 and 17 are "battery life" and index 15 is the opinion word "good". I am not sure if you have figured this out already and found some way to label the corpus.
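
    For illustration, here is a minimal sketch of how one such labelled line could be assembled. The make_labelled_line helper is hypothetical, and the exact separator and field layout are assumptions, so compare against the sample files under aste/data/triplet_data/ in the repo:

        # Sketch only: builds one labelled line in the style described above.
        # The separator and the triplet layout (aspect indices, opinion indices,
        # polarity string) are assumptions - verify against the repo's sample data.

        def make_labelled_line(tokens, triplets, sep="####"):
            """tokens: list of words; triplets: list of (aspect_idxs, opinion_idxs, polarity)."""
            return " ".join(tokens) + sep + str(triplets)

        tokens = ("I charge it at night and skip taking the cord with me "
                  "because of the good battery life .").split()

        # Tokens 16-17 are "battery life" (the aspect), token 15 is "good" (the opinion).
        triplets = [([16, 17], [15], "POS")]

        print(make_labelled_line(tokens, triplets))
        # I charge it at night ... because of the good battery life .####[([16, 17], [15], 'POS')]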