Tags: python-3.x, language-model, huggingface-transformers

While running the Hugging Face gpt2-xl model, embedding index is getting out of range


I am trying to run the Hugging Face gpt2-xl model. I ran code from the quickstart page that loads the small gpt2 model and generates text with the following code:

from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')

generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None

for i in range(100):
    print(i)
    output, past = model(context, past=past)
    token = torch.argmax(output[0, :])

    generated += [token.tolist()]
    context = token.unsqueeze(0)

sequence = tokenizer.decode(generated)

print(sequence)

This runs perfectly. Then I tried to run the gpt2-xl model. I changed the tokenizer and model loading code as follows:

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')

The tokenizer and model loaded perfectly, but I am getting an error on the following line:

output, past = model(context, past=past)

The error is:

RuntimeError: index out of range: Tried to access index 204483 out of table with 50256 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418

Looking at the error, it seems that the embedding size is not correct. So I wrote the following line to specifically fetch the config of gpt2-xl:

from transformers import GPT2Config

config = GPT2Config.from_pretrained("gpt2-xl")

But here vocab_size is 50257, so I changed the value explicitly with:

config.vocab_size=204483

Then, after printing the config, I can see that the previous line took effect. But I am still getting the same error.


Solution

  • This was actually an issue I reported, and they fixed it: https://github.com/huggingface/transformers/issues/2774
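As a side note, the out-of-range index in the traceback is consistent with one detail of the loop above (an observation from the error value, not confirmed by the linked issue): `torch.argmax(output[0, :])` takes the argmax over the *flattened* `[seq_len, vocab_size]` logits, so it can return a flat index as large as `seq_len * vocab_size - 1`, which is not a valid token id. Restricting the argmax to the last position's logits keeps the result inside the vocabulary. A minimal sketch with stand-in logits (no model download needed; the values are hypothetical):

```python
import torch

# Stand-in logits with the shape GPT-2 returns: [batch, seq_len, vocab_size]
vocab_size = 50257
logits = torch.zeros(1, 5, vocab_size)
logits[0, -1, 3455] = 1.0  # maximum at the last position: token id 3455
logits[0, 3, 3455] = 2.0   # an even larger value at an earlier position

# Buggy: argmax over the flattened [seq_len, vocab_size] block returns a
# flat index (position * vocab_size + token id), which can exceed vocab_size.
flat_id = torch.argmax(logits[0, :]).item()      # 3 * 50257 + 3455 = 154226

# Fixed: argmax over the last position's logits only.
next_id = torch.argmax(logits[0, -1, :]).item()  # 3455, a valid token id
```

In the loop above, that would mean replacing `token = torch.argmax(output[0, :])` with `token = torch.argmax(output[0, -1, :])`, so the sampled id always indexes into the 50257-row embedding table.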