Tags: python, dataset, huggingface-transformers, huggingface-datasets

NonMatchingSplitsSizesError loading huggingface BookCorpus


I want to load BookCorpus like this:

train_ds, test_ds = load_dataset('bookcorpus', split=['train', 'test'])

However, I get the following error:

Traceback (most recent call last):             
  File "<stdin>", line 1, in <module>
  File "/home/marcelbraasch/.local/lib/python3.8/site-packages/datasets/load.py", line 1627, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/marcelbraasch/.local/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
    self._download_and_prepare(
  File "/home/marcelbraasch/.local/lib/python3.8/site-packages/datasets/builder.py", line 709, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/home/marcelbraasch/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=4853859824, num_examples=74004228, dataset_name='bookcorpus'), 'recorded': SplitInfo(name='train', num_bytes=2982081448, num_examples=45726619, dataset_name='bookcorpus')}]

I want to save the dataset to disk afterwards, since I don't want to download it every time I use it. What causes this error?


Solution

  • The original BookCorpus is no longer publicly available, so the files the loading script downloads today are smaller than the copy the dataset's metadata was generated from. The verification step compares the recorded split sizes against what was actually downloaded (74,004,228 expected examples vs. 45,726,619 recorded), and the mismatch raises NonMatchingSplitsSizesError.

    As a workaround, you can recrawl the corpus yourself:

    https://github.com/soskek/bookcorpus
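
    Alternatively, if you can live with the smaller copy that is currently served, you can tell `datasets` to skip the split-size check: recent releases accept `verification_mode="no_checks"` in `load_dataset`, and the 1.x version shown in the traceback uses `ignore_verifications=True` instead. Note that `bookcorpus` only publishes a `train` split, so there is no `'test'` split to request. Once loaded, `save_to_disk` / `load_from_disk` avoid re-downloading. Below is a sketch of that caching pattern, using a tiny in-memory stand-in dataset so it runs without the multi-gigabyte download:

    ```python
    # For the real corpus you would first load with verification disabled,
    # e.g. (recent datasets releases):
    #   ds = load_dataset("bookcorpus", split="train", verification_mode="no_checks")
    # or, on the 1.x version from the traceback:
    #   ds = load_dataset("bookcorpus", split="train", ignore_verifications=True)
    import tempfile

    from datasets import Dataset, load_from_disk

    # Small stand-in dataset so the pattern runs without the full download.
    ds = Dataset.from_dict({"text": ["first sentence.", "second sentence."]})

    with tempfile.TemporaryDirectory() as path:
        ds.save_to_disk(path)            # one-time cost after the download
        reloaded = load_from_disk(path)  # cheap on every later run
        texts = reloaded["text"]

    print(texts)
    ```

    In practice you would save to a permanent directory (not a temporary one) and call `load_from_disk` on subsequent runs instead of `load_dataset`.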