I tried to replicate a simple Python example that loads and runs a small LLM.
I am on a macOS machine with an Apple M1. I created a separate environment where I installed PyTorch and llama-cpp-python. The code:
from llmflex import LlmFactory

# Load the model from Hugging Face
try:
    # Instantiate the model with the correct identifier
    model = LlmFactory("TheBloke/OpenHermes-2.5-Mistral-7B-GGUF")
    # Configure parameters directly if the object itself is callable
    # llm = model(temperature=0.7, max_new_tokens=512)
    # Disable Metal and run on CPU
    llm = model(temperature=0.7, max_new_tokens=512, use_metal=False)
    # Generate a response
    response = llm.generate("Hello, how are you?")
    print(response)
except AttributeError as e:
    print(f"Attribute error: {e}")
except AssertionError as e:
    print(f"Assertion error: {e}")
except Exception as e:
    print(f"An error occurred: {e}")
Either way (with or without use_metal=False), I received the same error. Here is the last portion of the output:
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32002
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q2_K
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 2.87 GiB (3.41 BPW)
llm_load_print_meta: general.name = teknium_openhermes-2.5-mistral-7b
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 32000 '<|im_end|>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: mem required = 2939.69 MiB
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size = 512.00 MiB, K (f16): 256.00 MiB, V (f16): 256.00 MiB
llama_build_graph: non-view tensors processed: 676/676
ggml_metal_init: allocating
ggml_metal_init: found discrete device: Apple M1
ggml_metal_init: picking device: Apple M1
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=260 "The file “ggml-metal.metal” couldn’t be opened because there is no such file." UserInfo={NSFilePath=ggml-metal.metal, NSUnderlyingError=0x600002eeb2a0 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
llama_new_context_with_model: ggml_metal_init() failed
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 |
Assertion error:
Obviously, something is wrong, but I cannot pinpoint the error because I am new to this.
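The line that stands out to me is the one saying ggml-metal.metal could not be opened. If that file really is missing from the installed package, that would explain why Metal initialization fails. Here is a quick diagnostic sketch to check whether the shader resources shipped with the wheel (I am assuming they live next to the installed llama_cpp package, which I am not certain of):

import os
import llama_cpp

# Look for the Metal shader resources inside the installed package directory.
pkg_dir = os.path.dirname(llama_cpp.__file__)
resources = [f for f in os.listdir(pkg_dir)
             if f.endswith((".metal", ".metallib"))]
print(pkg_dir)
print(resources or "no Metal resources found")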
I do not want to use CUDA; I want to use the CPU.
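If the file is genuinely missing, I believe llama-cpp-python can be reinstalled with Metal disabled at build time, along these lines (the LLAMA_METAL flag is my assumption based on the llama.cpp build options, so treat it as a guess):

CMAKE_ARGS="-DLLAMA_METAL=off" pip install --force-reinstall --no-cache-dir llama-cpp-python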
Please help.
Here is some additional information: https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF
I guess this is an issue specific to macOS on the M1; Metal support seems to have some problems that have not been fully resolved yet. I am closing this, but if there is an answer, please DM me.