Tags: deep-learning, language-model

fastai: ValueError: __len__() should return >= 0


While running the following program - https://rawgit.com/sizhky/eef1482e63387df8e9e045ac1e5a0ce8/raw/bdbebafaab21739a27f6bf32e83da1557919b44b/lm.html

I'm unable to call learner.fit as it throws the error above.

Specifically, I'm trying to train a language model by taking a text file, converting it to a LanguageModelData, and feeding it to an RNN via get_model:

md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10)
learner = md.get_model(opt_fn, em_sz, nh, nl, dropouti=0.05, dropout=0.05, wdrop=0.1, dropoute=0.02, dropouth=0.05)
learner.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)
learner.clip = 0.3
learner.fit(3e-3, 4)


ValueError                                Traceback (most recent call last)
<ipython-input-7-579772ee6693> in <module>()
----> 1 learner.fit(3e-3, 4)

/nfsroot/data/home/yeshwanth/misc/fastai/fastai/courses/dl1/practice/fastai/learner.py in fit(self, lrs, n_cycle, wds, **kwargs)
    285         self.sched = None
    286         layer_opt = self.get_layer_opt(lrs, wds)
--> 287         return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs)
    288 
    289     def warm_up(self, lr, wds=None):

/nfsroot/data/home/yeshwanth/misc/fastai/fastai/courses/dl1/practice/fastai/learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, use_clr_beta, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, use_swa, swa_start, swa_eval_freq, **kwargs)
    232             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, fp16=self.fp16,
    233             swa_model=self.swa_model if use_swa else None, swa_start=swa_start,
--> 234             swa_eval_freq=swa_eval_freq, **kwargs)
    235 
    236     def get_layer_groups(self): return self.models.get_layer_groups()

/nfsroot/data/home/yeshwanth/misc/fastai/fastai/courses/dl1/practice/fastai/model.py in fit(model, data, n_epochs, opt, crit, metrics, callbacks, stepper, swa_model, swa_start, swa_eval_freq, **kwargs)
    159 
    160         if not all_val:
--> 161             vals = validate(model_stepper, cur_data.val_dl, metrics, seq_first=seq_first)
    162             stop=False
    163             for cb in callbacks: stop = stop or cb.on_epoch_end(vals)

/nfsroot/data/home/yeshwanth/misc/fastai/fastai/courses/dl1/practice/fastai/model.py in validate(stepper, dl, metrics, seq_first)
    220     stepper.reset(False)
    221     with no_grad_context():
--> 222         for (*x,y) in iter(dl):
    223             y = VV(y)
    224             preds, l = stepper.evaluate(VV(x), y)

/nfsroot/data/home/yeshwanth/misc/fastai/fastai/courses/dl1/practice/fastai/nlp.py in __next__(self)
    135 
    136     def __next__(self):
--> 137         if self.i >= self.n-1 or self.iter>=len(self): raise StopIteration
    138         bptt = self.bptt if np.random.random() < 0.95 else self.bptt / 2.
    139         seq_len = max(5, int(np.random.normal(bptt, 5)))

ValueError: __len__() should return >= 0
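
The ValueError at the bottom of the traceback is raised by Python itself whenever `len()` on an object returns a negative number. A minimal, self-contained sketch of how this can happen (an assumption for illustration, not fastai's exact code: suppose the loader derives its length from the corpus token count, which goes negative on empty data):

```python
# Hypothetical stand-in for a loader whose __len__ depends on how much
# data it holds. Names and arithmetic are illustrative only.
class Loader:
    def __init__(self, n_tokens, bptt):
        self.n = n_tokens      # total tokens in the corpus
        self.bptt = bptt       # backprop-through-time window

    def __len__(self):
        # Negative when n_tokens is 0; Python's len() then raises
        # "ValueError: __len__() should return >= 0".
        return self.n // self.bptt - 1

print(len(Loader(700, 70)))    # a non-empty corpus is fine: prints 9
try:
    len(Loader(0, 70))         # an empty corpus triggers the error
except ValueError as e:
    print(e)                   # prints: __len__() should return >= 0
```

So the error is a symptom, not the cause: something upstream left the loader with no data, making its computed length negative.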

Solution

  • Looks like your data is in a single .txt file, while LanguageModelData.from_text_files() expects folders containing many files.

    UPD: solved! There must be at least bs files in each folder! Otherwise the LanguageModelLoader behind LanguageModelData ends up with empty data.

    I faced the same error during validation, and the problem seems to be in how LanguageModelData() constructs its datasets:

    for (*x, y) in md.trn_dl:
        set_trace()
    

    x should be a PyTorch tensor of shape (seq_len, batch_size), and y a 1-dimensional tensor of size seq_len*batch_size. The same holds for md.val_dl. In your case there is likely no (*x, y) at all, meaning something is very wrong with the data. len(md.trn_dl) and len(md.val_dl) must not be 0.

    I'd appreciate any other solutions; thank you for the question!

    Also, a newer version of the language model drops torchtext and is easier to debug: https://github.com/fastai/fastai/blob/master/courses/dl2/imdb.ipynb