I tried to load the Llama-2-7b-hf LLM with QLoRA using the following code:
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)  # I have permissions.
# bnb_config is the BitsAndBytesConfig used for quantization (definition omitted here).
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, quantization_config=bnb_config, device_map="auto", use_auth_token=True)
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=[
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config) # got the error here
I got this error:
File "/home/<my_username>/.local/lib/python3.10/site-packages/peft/tuners/lora.py", line 333, in _find_and_replace
raise ValueError(
ValueError: Target modules ['query_key_value', 'dense', 'dense_h_to_4h', 'dense_4h_to_h'] not found in the base model. Please check the target modules and try again.
How can I solve this? Thank you!
The strings in target_modules differ from model to model; the names you passed (query_key_value, dense, dense_h_to_4h, dense_4h_to_h) are what BLOOM- and Falcon-style architectures call their layers, not Llama. To find the right names, load the base model with

model = AutoModelForCausalLM.from_pretrained(model_id)

then print it (or iterate over its modules) to see what its linear layers are called, and pass those names to target_modules.
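Here is a minimal sketch of that inspection step; it assumes a plain, unquantized load just for looking at the layer names:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Collect the leaf names of every linear layer; these leaf names are
# the strings that LoraConfig.target_modules matches against.
linear_names = {name.split(".")[-1]
                for name, module in model.named_modules()
                if isinstance(module, torch.nn.Linear)}
print(linear_names)
# -> {'q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj', 'lm_head'}

For Llama-2 the attention projections are q_proj, k_proj, v_proj, o_proj and the MLP layers are gate_proj, up_proj, down_proj (leave lm_head out of target_modules). So your config becomes:

config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)  # no ValueError now

Targeting only the attention projections is also common; adding the MLP layers trains more adapter parameters but tends to help quality.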