I read the first link and was curious what the outcome of the case was, since the article was released in 2021. It seems that Jeanne was declared alive again in 2023, which is the most recent public reporting on the case:
Last time (after years of doing it manually every once in a while) I just gave codex an ephemeral restricted Cloudflare API Token / key / whatever, the screenshot, and it set up all the records on its own.
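For the curious, the "restricted token" part is the whole trick: Cloudflare's v4 API lets you create DNS records with a token scoped to a single zone's DNS:Edit permission. A minimal sketch (zone ID, token, and record values here are placeholders, not real ones):

```python
import json

# Hypothetical values -- substitute your own zone ID and a scoped token.
CF_API = "https://api.cloudflare.com/client/v4"

def build_dns_record_request(zone_id: str, token: str,
                             name: str, content: str,
                             record_type: str = "A",
                             proxied: bool = True) -> dict:
    """Assemble the pieces of a Cloudflare v4 'create DNS record' call.

    Actually sending it is one requests.post(url, headers=..., json=body)
    away; the token only needs Zone > DNS > Edit on that one zone, so a
    leaked or forgotten token can't touch anything else.
    """
    return {
        "url": f"{CF_API}/zones/{zone_id}/dns_records",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": {
            "type": record_type,
            "name": name,        # e.g. "www.example.com"
            "content": content,  # e.g. "203.0.113.7"
            "ttl": 300,
            "proxied": proxied,
        },
    }

req = build_dns_record_request("ZONE_ID", "SCOPED_TOKEN",
                               "www.example.com", "203.0.113.7")
print(json.dumps(req["body"], indent=2))
```

The point of handing an agent a token like this instead of your account credentials is that the blast radius is one zone's DNS, and you can revoke it the moment the records are in place.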
Offering something like a local Gemma 4 (though apparently not what we get here) to web apps via a browser API could change UX quite drastically. Possibly for the better. We had a project where it could have been nice.
> Including a 4gb is a negligible amount of space for current hardware and Chrome is not known as the browser to run on resource constrained devices.
4gb definitely isn’t a negligible amount of space on most people’s devices.
The MacBook Neo, quite successful it would seem, has 256GB of storage in its base configuration.
A MacBook Air and a basic sub-$1000 Dell laptop start at 512GB.
> To put 4gb in context, I currently have 2 tabs open that nearly take up 4gb.
You are conflating disk and memory.
> The fact Chrome also has a way to disable this makes it kind of a nothingburger in my opinion.
There’s a reason they picked an opt-out model for this, and not an opt-in approach.
But I also see the point in it. We recently did a hackathon, and I considered relying on Gemma 4 for privacy reasons: the local model could interpret the user’s natural-language request and derive less privacy-revealing requests from it.
But then, a web app that shows people a loading screen while it downloads a 4GB model probably wouldn’t be a best-selling UX.
> I never conflated anything. I said it's a negligible amount of space for current hardware, which I still believe.
>
> If anything, the fact that I think the amount of space is acceptable for the amount of ram a modern laptop has exaggerates the point.
How does the memory usage of your browser tabs relate to the amount of disk space taken up by the downloaded models?
> That's the approach they take for most of their features.
And there’s a reason for that.
The same reason the EU forced companies to make consent to marketing and consumer data sharing opt-in rather than opt-out: we have a bias.
> Which seems to be the motivation of having these local models embedded in the browser's available resources
Sooo… we basically agree that it could prove useful? In the long run, if and when they decide to pick a model that isn’t half brain dead (apparently it’s based on Gemma 3).
> How does the memory usage of your browser tabs relate to the amount of disk space taken up by the downloaded models?
I am talking about space in general: data size. What relates the two is that both store *data*; the components doing the storing are just cheaper or more expensive depending on the architecture. The fact I still see 4GB as an acceptable cost in RAM underscores the point, because it shows I'm okay with 4GB being consumed in a relatively expensive storage component (meaning I'd obviously be okay with it sitting on disk).
> if and when they decide to pick a model that isn’t half brain dead (apparently it’s based on Gemma 3)
Gemma 3 is far from brain dead and can be used for a variety of different tasks, translation being one I've personally used it for.
One could think that.
But VSCode is the one that occasionally failed to simply render text.
No idea what happened those handful of times, but the UI was completely screwed up, as if it were one of those "scratch to reveal" games, but with the file’s content (and unresponsive, obviously).
Edit: As someone kindly pointed out, I wrongly assumed the first response was entirely Anthropic’s rather than the author’s.
~~I mean, the worst part is the gif at the end of the message.~~
~~What are they even trying to do? What are they trying to convey? It just feels like being given the finger and getting my face rubbed in it on top of that.~~
That’s pretty much it. I’d say minimal supervision, and I can’t say whether or not it is doable or desirable, but maybe I’m not bullish enough. Or too much.
There’s a joke about dark factories, by the way:
> The Factory of the Future Will Have Only Two Employees, a Man and a Dog.
> - The human feeds the dog.
> - The dog makes sure no one touches the equipment.
Because then they lose vertical integration and the extra ability it grants to tune settings to reduce costs / token use / response time for subscription users.
Or improve performance and efficiency, if we’re generous and give them the benefit of the doubt.
It makes sense, in a way. It means the subscription deal is something along the lines of a fixed, predictable price in exchange for Anthropic controlling usage patterns, scheduling, throttling (quota consumption), defaults, and the effective workload shape (system prompt, caching) in whatever way best optimises the system for them (or us, if we’re feeling generous) / makes the deal sustainable for them.
It may be (but I wouldn’t know) that some of the other changes not covered here reduced costs on their side without impacting users, improving the viability of their subscription model. Or maybe they even improved things for users.
I’d really appreciate more transparency on this, and not just when things fail.
But I’ve learned my lesson. I’ve been weaning off Claude for a few weeks: cancelled my subscription three weeks ago, let it expire yesterday, and moved to both another provider and a third-party open-source harness.
Nothing you wrote makes sense. The limits exist so Anthropic isn't operating at a loss. If they can customize Claude through Code, I see no reason why they couldn't do so through other wrappers. Other wrappers can also make use of the cache.
If you worry about "degraded" experience, then let people choose. People won't be using other wrappers if they turn out to be bad. People ain't stupid.
By imposing the use of their harness, they control the system prompt:
> On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality, and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7
They can pick the default reasoning effort:
> On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode
They can decide what to keep and what to throw out (beyond simple token caching):
> On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6
It literally is all in the post.
I don't worry about anything though. It's not my product. I don't work for Anthropic, so I really couldn't care less about anyone else's degraded (or not) experience.
Evidently, all these things you just dismissed matter, else all the changes I quoted from the original post wouldn’t have affected anyone, or half as many people, or half as much. Anthropic wouldn’t have had any complaints to investigate, the article promoting this entire thread wouldn’t exist, and we wouldn’t be having this very conversation.
Defaults matter. A large share of people never change them (status quo bias, psychological inertia). Having control over them (and usage quotas) means Anthropic can control and fine-tune what this fixed subscription costs them.
And evidently (re: the original article), they tried to do so.
> Defaults matter. A large share of people never change them (status quo bias, psychological inertia). Having control over them (and usage quotas) means Anthropic can control and fine-tune what this fixed subscription costs them.
Allowing third party wrappers doesn't mean Claude Code would cease to exist. The opposite actually, Claude Code would be the default.
People dissatisfied with Code would simply use other wrappers. I call it a win-win. I don't see how Anthropic would lose here; they would still retain the ability to control the defaults.
Except that one of the major other wrappers was pi, through OpenClaw, with hundreds of thousands of instances running every hour on that heartbeat.
I have no idea what the share of OpenClaw instances running on pi was, or third-party wrappers in general, but it was obviously large enough that Anthropic decided they had to put an end to it.
Conversely, from the latest developments, it would seem they are perfectly fine with people running OpenClaw with Claude models through Claude Code’s programmatic interface using subscriptions.
But in the end, this, my take, your take, is all conjecture. We are both on the outside looking in.
eg https://www.theguardian.com/lifeandstyle/2021/jul/03/they-sa...
Edit: and a few others https://news.ycombinator.com/item?id=48060857
The Guardian article about the Romanian man has a great and horrible quote:
> The court told him he was too late, and would have to remain officially deceased.