I still haven't found useful "memory". It's either an agents.md with a high-level summary, which is fairly useless for specific details (e.g. "editing this element requires marking this other element as a draft"), or something detailed that explains the nitty-gritty, which seems to give so much detail that it gets ignored, or detail from one functional area contaminates the intended changes in another functional area.
The only approach I've found that works is no memory, and manually choosing the context that matters for a given agent session/prompt.
Even as someone highly interested in memory I don’t see it as a useful tool for coding. The source of truth for what a repo does or should do is the repo itself.
What you're describing sounds more like code review guidelines, which can be explicitly brought into context at specific times during a change. A memory system is both too complex and less accurate for this.
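The "no memory, explicit context" workflow described above can be sketched as a tiny script. The file names and the guideline text here are hypothetical, purely for illustration; the point is that the guidelines are concatenated into the prompt only for the sessions where they apply, instead of living in a persistent memory file the agent always sees.

```shell
#!/bin/sh
# Sketch: assemble per-session context by hand instead of relying on
# agent "memory". All paths and contents below are made-up examples.
mkdir -p docs

# A narrow, area-specific guideline (the kind that pollutes a global memory file).
printf 'When editing the publish element, mark dependent elements as draft.\n' \
  > docs/review-guidelines.md

# The actual task for this session.
printf 'Task: rename the publish button.\n' > prompt.txt

# Build the context for this one session only; other sessions skip the guidelines.
cat docs/review-guidelines.md prompt.txt > context.txt
```

The resulting `context.txt` is what you'd paste (or pipe) into the agent session; a change in another functional area would simply use a different, smaller concatenation.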
Yeah I feel the same way. Wonder when/if we'll get continual learning from these models. I feel like they are smart enough already but their lack of real memory makes them a pain to deal with.
Google Gemini does this sort of thing. External to the model, I presume. And it's very annoying.
A friend told me he would like Claude to remember his personality, which is exactly what Gemini is trying to do.
A machine pretending to be human is disturbing enough. A machine pretending to understand you will spiral into spitting out exactly what we want to read.