large-language-model, huggingface, http-error, llama

Error installing Meta-Llama-3-70B model from Hugging Face Hub


I'm trying to load the Meta-Llama-3-70B model from the Hugging Face Hub using the Transformers library in Python, but I'm encountering the following error:

OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like meta-llama/Meta-Llama-3-70B is not the path to a directory containing a file named config.json.  Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

Here's the code I'm using:

import torch
import transformers

model_id = "meta-llama/Meta-Llama-3-70B"

# Load the model in bfloat16 and let Accelerate place it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline("Hey how are you doing today?")

I've been granted access to the Meta-Llama-3-70B model on the Hugging Face website, but I'm still encountering this error. I've checked my internet connection, and it's working fine.

Can someone help me understand what might be causing this issue and how to resolve it? Are there any additional steps I need to take to successfully load and use the Meta-Llama-3-70B model from the Hugging Face Hub?


Solution

  • If you are still facing this problem even after being granted permission for the gated model, follow these steps:

    First, get your Hugging Face access token from your account settings (https://huggingface.co/settings/tokens).

    Then run this code:

    from huggingface_hub import login

    # Authenticate this session with your access token
    login(token='xxxxxxxxxxxxxxxxxxxxxxx')
    

    Replace the x's with your access token, then run the model again. Two optional sketches follow: one to verify that the login worked, and one to pass the token directly without logging in.
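
    To confirm that the token is actually being picked up before re-running the model, you can ask the Hub which account the saved token belongs to. Here is a minimal sketch using huggingface_hub's whoami() helper, assuming login() has already been called as above:

    from huggingface_hub import whoami

    # Raises an error if no valid token is available;
    # otherwise returns details about the authenticated account
    user = whoami()
    print(f"Logged in as: {user['name']}")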
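
    Alternatively, if you prefer not to store the token on the machine at all, recent versions of Transformers accept a token argument on pipeline() (older releases used use_auth_token instead), so you can pass it directly. A sketch under that assumption, reusing the code from the question:

    import torch
    import transformers

    pipeline = transformers.pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3-70B",
        model_kwargs={"torch_dtype": torch.bfloat16},
        device_map="auto",
        token="xxxxxxxxxxxxxxxxxxxxxxx",  # your Hugging Face access token
    )
    pipeline("Hey how are you doing today?")

    Passing the token per call avoids writing it to the local credential cache, which can be preferable on shared machines.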