Bug: SIGABRT on macOS 14.7 (23H124) #565

Open
Omoeba opened this issue Sep 19, 2024 · 0 comments
What happened?

I hit the following SIGABRT crash with llava-v1.5-7b-q4.llamafile, with Phi-3-mini-4k-instruct.F16.llamafile, and with the standalone llamafile command.

Version

llamafile v0.8.13

What operating system are you seeing the problem on?

Mac

Relevant log output

```
error: Uncaught SIGABRT (SI_0) on MBP.lan pid 23491 tid 23491
 /usr/local/bin/llamafile
 No such file or directory
 Darwin Cosmopolitan 3.9.1 MODE=aarch64; Darwin Kernel Version 23.6.0: Wed Jul 31 20:48:04 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6030 MBP.lan 23.6.0
 cosmoaddr2line /usr/local/bin/llamafile 1882115d0 d723000188156a30 b074000188155d20 3b1e80019270f194 dc048001926ebdb0 d7388001926dac3c c22b8001925bff18 793e8001027d70c4 1027d6838 800098950 8001109d0
 0000000000000000 x0 b70b0653881a5a89 x8  0000000000000148 x16 00000001eedca000 x24
 0000000000000000 x1 b70b0652781055c9 x9  00000001fa7a2720 x17 00000001f5bb6000 x25
 0000000000000000 x2 000000000000000a x10 0000000000000000 x18 00000001927508d0 x26
 0000000000000000 x3 0000000000000000 x11 0000000000000006 x19 0000000000000000 x27
 0000000000000073 x4 0000000000000037 x12 00000001f00a0f40 x20 0000000802a817c0 x28
 000000000000002e x5 0000000146e06780 x13 0000000000000103 x21 000000016d829f70 x29
 0000000000000000 x6 01000001f00b8589 x14 00000001f00a1020 x22 0000000188249c20 x30
 0000000000000000 x7 00000001f00b8588 x15 ffffffffffffffff x23 000000016d829f50 x31
 000000016d829f50 sp 1882115d0 pc NULL-2011302928
 000000016d829f70 fp d723000188156a30 lr NULL-2012067760
 000000016d829fb0 fp b074000188155d20 lr NULL-2012071104
 000000016d82a010 fp 3b1e80019270f194 lr NULL-1838297164
 000000016d82a030 fp dc048001926ebdb0 lr NULL-1838441520
 000000016d82a100 fp d7388001926dac3c lr NULL-1838511524
 000000016d82a140 fp c22b8001925bff18 lr NULL-1839669960
 000000016d82a170 fp 793e8001027d70c4 lr g_events+37283940
 000000016d82a2b0 fp 1027d6838 lr g_events+37281752
 000000016d82a2e0 fp 800098950 lr ggml_backend_metal_init+64
 000000016d82a2f0 fp 8001109d0 lr llama_new_context_with_model+1536
 000000016d82ad80 fp 8000d7df0 lr llama_init_from_gpt_params(gpt_params&)+228
 000000016d82b030 fp 800018808 lr llama_server_context::load_model(gpt_params const&)+220
 000000016d82b110 fp 800013530 lr server_cli(int, char**)+2704
 000000016d82ceb0 fp 800004b34 lr main+356
 000000016d82df20 fp 800003fcc lr cosmo+1136
 000000016d82df80 fp 800000140 lr _start
[1]    23491 abort      llamafile -m Phi-3-mini-4k-instruct.F16.llamafile
```