Tags: nlp, huggingface-transformers, allennlp

Google mT5-small configuration error because the number of attention heads is not a divisor of the model dimension


The configuration file for the Hugging Face google/mt5-small model (https://huggingface.co/google/mt5-small) defines:

{
...
  "d_model": 512,
...
  "num_heads": 6,
...
}

Link to the config file: https://huggingface.co/google/mt5-small/resolve/main/config.json
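
For reference, these values can also be checked programmatically. The following is a minimal sketch using the transformers AutoConfig API (the attribute names d_model and num_heads are the ones from the config file above):

from transformers import AutoConfig

# Load the published mt5-small configuration from the Hugging Face Hub.
config = AutoConfig.from_pretrained("google/mt5-small")

print(config.d_model)                     # 512
print(config.num_heads)                   # 6
print(config.d_model % config.num_heads)  # 2 -> 512 is not divisible by 6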

Question:

As far as I understand, the number of attention heads should be a divisor of the model dimension. This is clearly not the case in this config file.

Do I misunderstand how self-attention is applied in mT5?

When I use the AllenNLP model (https://github.com/allenai/allennlp-models/blob/main/allennlp_models/generation/models/t5.py) as a sequence-to-sequence model, I receive the following error message:

Summary:

allennlp.common.checks.ConfigurationError: The hidden size (512) is not a multiple of the number of attention heads (6)

Full traceback:

Traceback (most recent call last):
  File "/snap/pycharm-professional/269/plugins/python/helpers/pydev/pydevd.py", line 1500, in _exec
    runpy._run_module_as_main(module_name, alter_argv=False)
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/__main__.py", line 50, in <module>
    run()
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/__main__.py", line 46, in run
    main(prog="allennlp")
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/__init__.py", line 123, in main
    args.func(args)
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 112, in train_model_from_args
    train_model_from_file(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 178, in train_model_from_file
    return train_model(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 254, in train_model
    model = _train_worker(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 490, in _train_worker
    train_loop = TrainModel.from_params(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 652, in from_params
    return retyped_subclass.from_params(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 686, in from_params
    return constructor_to_call(**kwargs)  # type: ignore
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 766, in from_partial_objects
    model_ = model.construct(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 82, in construct
    return self.constructor(**contructor_kwargs)
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 66, in constructor_to_use
    return self._constructor.from_params(  # type: ignore[union-attr]
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 652, in from_params
    return retyped_subclass.from_params(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 686, in from_params
    return constructor_to_call(**kwargs)  # type: ignore
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp_models/generation/models/t5.py", line 32, in __init__
    self.t5 = T5Module.from_pretrained_module(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/transformer_module.py", line 251, in from_pretrained_module
    model = cls._from_config(config, **kwargs)
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/t5.py", line 852, in _from_config
    return cls(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/t5.py", line 783, in __init__
    self.encoder: T5EncoderStack = encoder.construct(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 82, in construct
    return self.constructor(**contructor_kwargs)
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/t5.py", line 600, in basic_encoder
    self_attention=block_self_attention.construct(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 82, in construct
    return self.constructor(**contructor_kwargs)
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 66, in constructor_to_use
    return self._constructor.from_params(  # type: ignore[union-attr]
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 686, in from_params
    return constructor_to_call(**kwargs)  # type: ignore
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/attention_module.py", line 471, in __init__
    super().__init__(
  File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/attention_module.py", line 91, in __init__
    raise ConfigurationError(
allennlp.common.checks.ConfigurationError: The hidden size (512) is not a multiple of the number of attention heads (6)

Solution

  • This is a very good question, and it points to a common misconception about Transformers that stems from an (unfortunate) formulation in the original Transformer paper. In particular, the authors write the following in Section 3.2.2:

    In this work, we employ h = 8 parallel attention layers, or heads. For each of these we use d_k = d_v = d_model / h = 64. [...]

    Note that the equality d_k = d_v = d_model / h is not strictly necessary; it is only important that the hidden representation is brought back to d_model at the end of each layer (here the attention output projection maps the concatenated heads back to d_model before the feed-forward portion). Specifically for mt5-small, the authors actually use an internal attention dimension of 384, which is simply the product d_kv * num_heads = 64 * 6.
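
    As a quick sanity check, this decoupling of the attention dimension from d_model can be verified directly from the published config (a minimal sketch using the transformers AutoConfig API; d_kv is the per-head dimension from the config file):

    from transformers import AutoConfig

    config = AutoConfig.from_pretrained("google/mt5-small")

    inner_dim = config.d_kv * config.num_heads  # 64 * 6 = 384
    print(inner_dim, config.d_model)            # 384 512
    # q/k/v project 512 -> 384 and the output projection o maps 384 -> 512,
    # so each layer ends at d_model again without 512 being divisible by 6.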

    Now, the problem is that many libraries assume that this relation between d_kv and d_model is enforced, because it saves some implementation effort that most users will never need anyway. I suspect (I am not very familiar with AllenNLP) that similar assumptions were made there, which is why you cannot load the model.

    Also, to clarify this, here is a peek at the modules of a loaded mt5-small:

    T5Block(
        (layer): ModuleList(
            (0): T5LayerSelfAttention(
                (SelfAttention): T5Attention(
                    (q): Linear(in_features=512, out_features=384, bias=False)
                    (k): Linear(in_features=512, out_features=384, bias=False)
                    (v): Linear(in_features=512, out_features=384, bias=False)
                    (o): Linear(in_features=384, out_features=512, bias=False)
                )
                (layer_norm): T5LayerNorm()
                (dropout): Dropout(p=0.1, inplace=False)
            )
            (1): T5LayerFF(
                (DenseReluDense): T5DenseGatedGeluDense(
                    (wi_0): Linear(in_features=512, out_features=1024, bias=False)
                    (wi_1): Linear(in_features=512, out_features=1024, bias=False)
                    (wo): Linear(in_features=1024, out_features=512, bias=False)
                    (dropout): Dropout(p=0.1, inplace=False)
                )
                (layer_norm): T5LayerNorm()
                (dropout): Dropout(p=0.1, inplace=False)
            )
        )
    )

    You can get the full model layout by simply calling list(model.modules())
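
    For example, the block above can be reproduced with the following sketch (it assumes the transformers MT5ForConditionalGeneration class and downloads the checkpoint on first use):

    from transformers import MT5ForConditionalGeneration

    model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

    # First encoder block: prints the T5Block structure shown above.
    print(model.encoder.block[0])

    # Flattened view of every submodule in the model.
    print(list(model.modules()))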