
> adding a button means osascript-driven malware could approve itself and you might not even see it happen

Hmm, I don't think that's true. How would the osascript run without getting past Gatekeeper itself? And how would it script the UI without TCC approval?

Zig uses unmanaged memory. But Rust also allows memory leaks, and they're not uncommon in large, complex programs, so this rewrite won't necessarily control for that.

What language doesn't allow memory leaks?

There are two kinds of memory leaks: forgotten manual freeing (all references are gone, but the allocation is not) and forgetting to get rid of references that keep an allocation alive. Both are kinds of logical error, but the first is mostly possible only in languages with manual memory management. The second is a universal logical error (only the programmer knows which live references are really needed).
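
Roughly, in Rust terms (a sketch only; safe Rust has no bare free, so the first kind is simulated with std::mem::forget):

    use std::collections::HashMap;
    use std::mem;

    fn main() {
        // Kind 1: every reference is gone, but the allocation is not.
        // In C this would be a missing free(); here we simulate it by
        // telling Rust to skip the destructor.
        let buf = Box::new([0u8; 1024]);
        mem::forget(buf); // allocation is now unreachable and never freed

        // Kind 2: a live reference nobody needs keeps data alive.
        // A cache that is never evicted is the classic case; only the
        // programmer knows these entries will never be read again.
        let mut cache: HashMap<u64, Vec<u8>> = HashMap::new();
        for i in 0..1_000_000 {
            cache.insert(i, vec![0u8; 256]); // grows forever, all "reachable"
        }
    }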

In the Haskell community I’ve seen the second kind called “space leaks.” I don’t see the term used much outside that community, but I like it and use it when talking about other languages as well.

Rust allows reference-counting cycles, right?

I suppose all languages allow them, depending on how you define a memory leak. Garbage-collected languages generally prevent them, since you never have to explicitly free memory, but if there are reference cycles, that memory can never be freed automatically. Rust has the same problem, and because Rust uses lifetimes to decide when to drop things, many people expect that to rule out memory leaks. But leaks are not considered a correctness or safety issue (OOM is a panic, and panicking is safe!). They are not only explicitly possible (through Box::leak) but also possible by mistake (again, usually through reference cycles).
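
A quick sketch of the by-mistake case, in 100% safe Rust (Node is a made-up type for illustration):

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: Option<Rc<RefCell<Node>>>,
    }

    fn main() {
        let a = Rc::new(RefCell::new(Node { next: None }));
        let b = Rc::new(RefCell::new(Node { next: Some(Rc::clone(&a)) }));
        a.borrow_mut().next = Some(Rc::clone(&b));
        // a and b now point at each other. When both locals go out of
        // scope, each node still has a strong count of 1, so Drop never
        // runs and the heap memory is never reclaimed.
    }

Breaking one direction of the cycle with std::rc::Weak is the usual fix.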

> but if there are reference cycles, that memory can never be freed automatically.

Many garbage collection algorithms can deal with cycles: tracing collectors free anything unreachable from the roots, whether or not it participates in a cycle.
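
A toy mark phase over a hypothetical index-based heap (not any particular runtime) shows why: each object is visited at most once, so a cycle with no path from a root gets collected like anything else.

    // Objects are indices; edges[i] lists the objects that i references.
    struct Heap {
        edges: Vec<Vec<usize>>,
        marked: Vec<bool>,
    }

    impl Heap {
        fn mark(&mut self, roots: &[usize]) {
            let mut stack = roots.to_vec();
            while let Some(i) = stack.pop() {
                if !self.marked[i] {
                    self.marked[i] = true; // visit once, so cycles terminate
                    stack.extend(self.edges[i].iter().copied());
                }
            }
        }
        // Sweep would then free every unmarked object, even if the objects
        // in a cycle all reference each other, because no root reaches them.
    }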


Layoffs are winning in the short term. Investors love them, and if you cut staff but keep revenue, that's profit.

This is an interesting idea, and I might try it with something smaller. There are more than 15,000 commits to Bun, so you'd need some way to operate on groups of commits in one prompt to get that done without thousands and thousands of API requests.
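
Something like this sketch, maybe (batch size and the model call are placeholders):

    use std::process::Command;

    fn main() {
        let out = Command::new("git")
            .args(["log", "--pretty=%h %s"])
            .output()
            .expect("git log failed");
        let log = String::from_utf8_lossy(&out.stdout);
        let commits: Vec<&str> = log.lines().collect();

        // Tune so each prompt stays under the model's context limit.
        const BATCH: usize = 100; // 15,000 commits -> ~150 requests
        for group in commits.chunks(BATCH) {
            let prompt = group.join("\n");
            // send `prompt` to the model here (placeholder)
            println!("batch of {} commits, {} bytes", group.len(), prompt.len());
        }
    }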

Qwen3.6 does a good job locally, except it can take 20-30 minutes to respond to a prompt on a Mac Studio with 32GB of RAM.

Apple Silicon before the M4 does not have matmul instructions, which makes prompt processing very slow. It's quite different on the M5, much more like using an Nvidia GPU.

Yeah, you probably do want to use a GPU for models of that size.

I also wonder what quantization you're using? If you haven't tried other quants, I really would.


This is qwen3.6:27b-coding-nvfp4. It's only an M1. If they ever ship an M5 Studio with 96GB of RAM, that's my next upgrade path for the local LLM experiments.

You can get work done with them if you have a harness that can drive outcomes without needing feedback (I've been building a TDD red-to-green agent harness lately that is very effective when given a good plan upfront). So if you can stand waiting a few days for results that would take only hours with a model deployed on frontier Nvidia hardware, you can get there this way.
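
A stripped-down sketch of the loop (the patch step is a placeholder for the model call, and cargo test stands in for whatever suite the project uses):

    use std::process::Command;

    // Placeholder: would send the failure output plus the upfront plan to
    // the model and apply whatever diff it proposes.
    fn propose_and_apply_patch(_failures: &str) {
        // model integration elided in this sketch
    }

    fn main() {
        for attempt in 1..=10 {
            let out = Command::new("cargo")
                .arg("test")
                .output()
                .expect("failed to run the test suite");
            if out.status.success() {
                println!("green after {} patch attempt(s)", attempt - 1);
                return;
            }
            // Red: feed the failures back in and try again, no human in
            // the loop, which is what makes a slow local model tolerable.
            propose_and_apply_patch(&String::from_utf8_lossy(&out.stderr));
        }
        eprintln!("still red after 10 attempts; needs a better plan");
    }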


The time delay is the real issue. Much, much slower wall-clock time.

Why are you connecting your agent to a database with write access? Are you out of your mind?


What if writing a blog post without AI was easy?


It's just computation, which the world already depended on. We're in the mainframe era, but "AI" will go personal and on-device.


Good analogy, but there's a key difference: mainframes were an institutional dependency, whereas the world's reliance on LLMs is consumer-driven, ubiquitous, and uncapped (e.g., you can always spend more on the same "loops"). Completely agree on the second point, though: powerful local models are the inevitable next step, and they are arriving fast.


Gemma4 and Qwen3.6 are pretty capable but will be slower and wrong more often than the larger models. Still, you can connect Gemma4 to opencode via Ollama and it... works! It really can write and analyze code. It's just slow. You need serious hardware to run these fast, and even then, they're too small to beat the "frontier" models right now. But it's early days.


If Apple Silicon keeps making the gains it has been making, a Mac Studio with 128GB of RAM plus local models will be a practical all-local workflow by, say, 2028 or 2030. OpenAI and Anthropic are going to have to offer something really incredible to keep subscription revenue from software developers in the near future, imo.

