
Would you share some additional details? Which CPU, how much unified memory / VRAM, and what tok/s do you get with that setup?




MBP M4 Max with 64GB of unified memory. I haven't measured tokens/sec; it feels slower than Claude, but not unbearably so.

It's not perfect yet; my sense is just that it's near the tipping point where models are efficient enough that running a local model is truly viable.
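If you want a rough tok/s number without setting up a benchmark harness, here's a minimal sketch, assuming an Ollama server on the default port and a model you've already pulled (the model name below is just a placeholder): the /api/generate response includes eval_count (generated tokens) and eval_duration (nanoseconds), which is enough for a back-of-the-envelope rate.

  # Rough tokens/sec check against a local Ollama server (assumed setup).
  import json, urllib.request

  req = urllib.request.Request(
      "http://localhost:11434/api/generate",
      data=json.dumps({
          "model": "llama3.1",  # placeholder: whatever local model you run
          "prompt": "Explain unified memory in one paragraph.",
          "stream": False,
      }).encode(),
      headers={"Content-Type": "application/json"},
  )
  with urllib.request.urlopen(req) as resp:
      body = json.load(resp)

  # eval_count = generated tokens, eval_duration = generation time in ns
  print(body["eval_count"] / (body["eval_duration"] / 1e9), "tok/s")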



