Tags: python-3.x, huggingface-transformers

query() of generator: `max_length` being exceeded


Goal: set min_length and max_length in a Hugging Face Transformers generator query.

I've passed 50 and 200 as these parameters, yet the lengths of my outputs are much higher...

There's no runtime failure.

from transformers import pipeline, set_seed

# GPT-2 text-generation pipeline, seeded for reproducible sampling.
generator = pipeline('text-generation', model='gpt2')
set_seed(42)

def query(payload, multiple, min_char_len, max_char_len):
    print(min_char_len, max_char_len)
    # Generate `multiple` completions bounded by min_length/max_length.
    list_dict = generator(payload, min_length=min_char_len, max_length=max_char_len, num_return_sequences=multiple)
    # Drop the prompt from each completion, keeping only the newly generated text.
    test = [d['generated_text'].split(payload)[1].strip() for d in list_dict]
    for t in test:
        print(len(t))
    return test

query('example', 1, 50, 200)

Output:

50 200
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
1015

Solution

  • Explanation:

    As explained by Narsil in a response to a Hugging Face 🤗 Transformers GitHub issue:

    Models don't ingest the text one character at a time, but one token at a time. There are different algorithms to achieve this, but basically "My name is Nicolas" gets transformed into ["my", " name", " is", " nic", "olas"], for instance, and each of those tokens has a number.

    So when you are generating tokens, they can themselves contain 1 or more characters (usually several; almost any common word is a single token, for instance). That's why you are seeing 1015 characters instead of your expected 200 (the tokens here average about 5 characters each).
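
    Below is a small illustration of this point (a sketch I'm adding, not code from the original answer). It loads the GPT-2 tokenizer via AutoTokenizer and compares character count to token count; the exact token split may vary:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained('gpt2')
    text = 'My name is Nicolas'
    tokens = tokenizer.tokenize(text)  # GPT-2 BPE pieces; 'Ġ' marks a leading space
    print(tokens)                      # something like ['My', 'Ġname', 'Ġis', 'ĠNic', 'olas']
    print(len(text), len(tokens))      # 18 characters, but only a handful of tokens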

    Solution:

    Here is how I resolved it:

    Rename min_char_len and max_char_len to min_tokens and max_tokens, and scale their values down to roughly 1/4 or 1/5, since a token averages about 4-5 characters.
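
    A minimal sketch of the renamed function (my reconstruction, reusing the generator defined in the question; the 1/4 divisor is illustrative):

    def query(payload, multiple, min_tokens, max_tokens):
        # min_length/max_length are measured in tokens (and count the prompt's tokens too).
        list_dict = generator(payload, min_length=min_tokens, max_length=max_tokens,
                              num_return_sequences=multiple)
        return [d['generated_text'].split(payload)[1].strip() for d in list_dict]

    # For a target of roughly 50-200 characters, pass roughly a quarter as many tokens:
    outputs = query('example', 1, 50 // 4, 200 // 4)
    for o in outputs:
        print(len(o))  # now on the order of 200 characters rather than ~1000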