Hacker News

I failed to run it in LM Studio on an M5 with 32 GB at even half the max context. It literally locked up the computer and I had to reboot.

Ran gemma-4-26B-A4B-it-GGUF:Q4_K_M just fine with llama.cpp, though. First time in a long time that I have been impressed by a local model. Both speed (~38 t/s) and quality are very nice.



