Does peft train newly initialized weights?...

deep-learning, huggingface-transformers, peft

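The question above asks whether newly initialized weights (for example a resized embedding or a fresh classification head) are actually updated during PEFT training. A minimal sketch of how to check which parameters PEFT leaves trainable; the GPT-2 base model and the `modules_to_save` entry are purely illustrative:

```python
# Minimal sketch (illustrative model and module names): inspect which weights
# PEFT marks as trainable after wrapping a base model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],     # GPT-2 attention projection
    modules_to_save=["lm_head"],   # also train (and save) this module in full
)
model = get_peft_model(base, config)

# Summary of trainable vs. frozen parameter counts.
model.print_trainable_parameters()

# List the parameters that will actually receive gradient updates.
for name, param in model.named_parameters():
    if param.requires_grad:
        print(name)
```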
Do I have to write custom AutoModel transformers class in case "TypeError: NVEmbedModel.forward...

deep-learning, huggingface-transformers, large-language-model, peft

Target modules for applying PEFT / LoRA on different models...

nlp, huggingface-transformers, huggingface, fine-tuning, peft

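For the target-modules question above, the right `target_modules` names depend on the architecture (Llama-style models expose `q_proj`/`v_proj`, GPT-2 exposes `c_attn`, and so on). A rough sketch, with the model id as a placeholder, of listing a model's module names and then targeting its attention projections:

```python
# Minimal sketch (placeholder model id): find candidate LoRA target modules,
# then build a LoraConfig for them.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Inspect the leaf module names to see what can be targeted.
print({name.split(".")[-1] for name, _ in model.named_modules()})

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```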
How to fix error `OSError: <model> does not appear to have a file named config.json.` when loa...

pytorch, nlp, huggingface-transformers, large-language-model, peft

AttributeError: 'TrainingArguments' object has no attribute 'model_init_kwargs'...

python, nlp, huggingface-transformers, large-language-model, peft

Can I dynamically add or remove LoRA weights in the transformer library like diffusers...

python, huggingface-transformers, huggingface, peft

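For the adapter add/remove question above, peft's multi-adapter API lets several LoRA adapters be attached to one base model and switched at runtime. A minimal sketch with placeholder model and adapter paths:

```python
# Minimal sketch (placeholder paths): attach two LoRA adapters and swap
# between them without rebuilding the base model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")

# Attach a first adapter, then load a second one under its own name.
model = PeftModel.from_pretrained(base, "path/to/adapter-a", adapter_name="a")
model.load_adapter("path/to/adapter-b", adapter_name="b")

model.set_adapter("b")         # route forward passes through adapter "b"
model.set_adapter("a")         # switch back to adapter "a"

with model.disable_adapter():  # temporarily run the bare base model
    pass
```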
Llama QLora error: Target modules ['query_key_value', 'dense', 'dense_h_to_4h'...

python, quantization, large-language-model, peft

Difference between GGUF and LoRA...

large-language-model, quantization, peft

PyTorch: AttributeError: 'torch.dtype' object has no attribute 'itemsize'...

python, pytorch, databricks, huggingface, peft

How to resolve ValueError: You should supply an encoding or a list of encodings to this method that ...

nlp, huggingface-transformers, huggingface-tokenizers, peft

I want to merge my PEFT adapter model with the base model and make a fully new model...

python, artificial-intelligence, huggingface-transformers, peft

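For the adapter-merging question above, a common pattern is to fold the LoRA deltas into the base weights with `merge_and_unload()` and save the result as a standalone checkpoint. A minimal sketch, with the model id and paths as placeholders:

```python
# Minimal sketch (placeholder ids/paths): merge a LoRA adapter into its base
# model and save a checkpoint that no longer needs peft to load.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model-id")
model = PeftModel.from_pretrained(base, "path/to/peft-adapter")

merged = model.merge_and_unload()        # fold LoRA deltas into the base weights
merged.save_pretrained("merged-model")   # standalone model directory

tokenizer = AutoTokenizer.from_pretrained("base-model-id")
tokenizer.save_pretrained("merged-model")
```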
FastAPI custom Validator Error: FastAPI/Pydantic not recognizing custom validator functions (Runtime...

fastapi, huggingface-transformers, pydantic, peft

How to load a fine-tuned peft/lora model based on llama with Huggingface transformers?...

python, huggingface-transformers, llama-index, peft

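For the last question, loading a LoRA fine-tune of a Llama-style model typically means loading the base model first and then wrapping it with the saved adapter. A minimal sketch, with the model id and adapter path as placeholders:

```python
# Minimal sketch (placeholder id/path): load a base model, apply a saved LoRA
# adapter, and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```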