This is frankly exciting. Politics of it all aside, it always feels great to wake up to a new model being released. I'll personally stay up quite late tonight if GPT-5.5 drops in Codex.
I don't find it exciting at all. I just feel anxiety about my career and my place in the world. I have a set of skills that I've developed over many years. I care about what I create. I consider it a craft. When I use my skills to solve a hard problem, I feel good about myself. When the AI does the work for me, I don't get that sense of accomplishment. I am seeing my value evaporate before my eyes.
I hate this stuff and I wish it had never been invented.
You might want to rethink this. Think of it as the opportunity of a lifetime, the beginning of a new era, just like the early Internet: you have the chance to set yourself up for life now. The window is getting shorter and shorter, but you can't deny that you have the potential NOW to thrive or start multiple businesses without much capital. Consider also that the best thing in the end is probably to build great things, regardless of how we build them, and make the world progress.
Cursor is still the best coding environment and harness. It's actually not even close. They are so good that they actually made Gemini usable.
The problem is they can't compete with Anthropic and OpenAI because they can't sell Opus and GPT at a discount to subscribers like OpenAI and Anthropic do with their subscriptions.
So they either need to build a competing model or slowly die.
I personally disagree on the first point. Claude code in a terminal with vim is much nicer. I just don’t see the need for the bloat of an IDE when the CLI versions work so damn well now.
> They are so good that they actually made Gemini usable
I think Gemini is the best model out there, and Cursor isn't who you should praise. I use it with JetBrains Junie. It's vastly cheaper than Claude, faster, produces better-quality code, actually listens to your instructions, and is more accurate. I'm sure the Claude Code CLI has some magic that I'm missing out on, but having everything just work in a nice IDE (with an LLM that actually understands your symbol table) is like magic.
Tried 3.1 Pro preview today a little bit. It's definitely blowing through credits quicker, and I'm not sure it's better quality, but it achieved all tasks perfectly.
IDK how Junie does it, but I spend less than $50 USD per month and I'm on it 30 hours per week.
That's why I'm so puzzled as to why Composer doesn't work better when they have the ability to train it from scratch for their own agent harness! Yet it still fails to apply edits, gets confused about why it can't call some commands in its sandbox, the list goes on...
Are you using Claude Code? If so, you have to update to the latest version. The system prompt in older versions of Claude Code doesn't work for Opus 4.7 and causes a bug similar to the one you are describing.
There are roughly 8B people in the world, and somewhere between 2-3B have never used the internet. If OpenAI manages to capture the 6B internet users while growing at 100% per year, they have at most 3 years of user growth left. Then what?
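Back-of-the-envelope, the doubling math works out like this (the ~0.8B starting base is my own assumption, not a reported figure):

```python
# Years of 100%/yr growth (doubling) left before hitting the
# addressable-user ceiling. Starting user count is an assumption.
def years_until_saturation(users: float, ceiling: float, growth: float = 1.0) -> int:
    """Count whole years of compounding `growth` until users >= ceiling."""
    years = 0
    while users < ceiling:
        users *= 1 + growth
        years += 1
    return years

# Assumed ~0.8B current users vs. a 6B internet-user ceiling:
# 0.8B -> 1.6B -> 3.2B -> 6.4B, i.e. saturation in 3 years.
print(years_until_saturation(0.8e9, 6e9))
```

Any plausible starting base around 1B gives the same answer: roughly three doublings and the entire internet is already a user.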
What makes GPT-5.4 bad to chat with? To me it seems smart and does the job, albeit a bit slowly. I'm using it only/mostly with the "pro"/xhigh reasoning.
The way it converses is the least human-like of all the models. It communicates like it's writing markdown documents instead of just conversing normally like every other model does. You ask a question and it spits out a design doc instead of just answering the question like a normal human would.
This story has played out numerous times already. Anthropic (or any frontier lab) has a new model with SOTA results. They act like it's Christ incarnate and represents the end of the world as we know it, and gate its release to drum up excitement and mystique.
Then the next lab catches up and releases its model more broadly.
Then later the open weights model is released.
The only way this type of technology is going to be gated "to only corporations" is if we continue on this exponential scaling trend as the "SOTA" model is always out of reach.
I don't know how you can read the report and the companies involved and dismiss this as hot air. What incentive does the Linux Foundation have to hype up Mythos? What about Apple?
How can you read the description of the exploits and be like "yeah that's nbd?"
And the only reason OSS has ever caught up is because they simply distill Claude or GPT. The day the big players make it hard to distill (like Anthropic is doing here), OSS is cooked.
And that's a good thing, why would you want random skiddie hackers to have access to a cyber super weapon?
No, that's a terrible thing, and random skiddie hackers absolutely should have access. This is only a temporary state of insecurity while these vulnerability scanners come online.
If this stuff is open source and not gatekept, it will become standard practice to run LLM security analysis on every commit, and software will no longer be vulnerable to these classes of attacks.
Keeping it behind closed doors also results in literal dead bodies on the ground. This isn’t the first time vulnerabilities have been hoarded and it never works out well for the greater good despite the original good intentions.
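To be concrete about the "scan every commit" idea: it could be as simple as a pre-commit hook like the sketch below. Everything here is hypothetical scaffolding; `analyze_with_llm` is a stub standing in for whatever model API you'd actually call.

```python
# Sketch of per-commit LLM security scanning as a git pre-commit hook.
# The model call is stubbed; wire in your provider of choice.
import subprocess

def staged_diff() -> str:
    """Return the diff of currently staged changes."""
    result = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    )
    return result.stdout

def build_audit_prompt(diff: str) -> str:
    """Wrap a diff in a security-review prompt."""
    return (
        "Review the following diff for memory-safety bugs, injection "
        "flaws, and auth bypasses. Reply CLEAN or FINDINGS: <list>.\n\n"
        + diff
    )

def analyze_with_llm(prompt: str) -> str:
    # Placeholder: in practice this calls an actual model endpoint.
    return "CLEAN"

def commit_is_safe(diff: str) -> bool:
    """Gate the commit on the scanner's verdict."""
    verdict = analyze_with_llm(build_audit_prompt(diff))
    return verdict.strip().startswith("CLEAN")
```

Dropped into `.git/hooks/pre-commit` (exiting nonzero when `commit_is_safe` is False), this blocks a commit the moment the scanner flags it, which is the whole point of having these tools be open rather than hoarded.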
It also took many years to put capable computers in the hands of the general public, but it eventually happened. I believe the same will happen here, we're just in the Mainframe era of AI.
Yeah, but computers didn't replace you. They are building AI to replace you. Do you think, if these companies eventually achieve AGI, that they are going to give you access to it? They are already gatekeeping an LLM because they don't trust you with it.
They'd better make billions directly from corporations instead of giving it to average people who might get a chance out of poverty (but also to bad actors who'd use it to do even worse things).
Anthropic's definition of "safe AI" precludes open-source AI. This is clear if you listen to what their CEO says in interviews. I think he might even prefer OpenAI's closed-source models winning over having open-source AI (because at least the former isn't a free-for-all).
For those who haven't been keeping up with DigitalOcean: they are done with the "Developer Cloud" and are now trying to become another enterprise cloud like AWS, GCP, and Azure.
They were very debt-heavy before the AI boom, and this will just make it worse, because I'm assuming they aren't raising $800M to pay off that debt.
You should definitely take that into account if you're using them or plan to.
I think this post should be directed at every TypeScript developer.
I think a lot of this is just TypeScript developers. I bet if you removed them from the equation, most of the problems he's writing about would go away. TypeScript developers didn't even understand what React was doing without an agent; now they're just one-shot prompting features, web apps, CLIs, and desktop apps and spitting them out to the world.
The prime example of this is literally Anthropic. They are pumping out features, apps, and CLIs, and EVERY single one of them releases broken.
This is my theory: they don't want other harnesses to use this because it costs them more. I don't know exactly how OpenCode works, but I'm assuming that when people use this plugin they're mostly using Opus for everything, while Claude Code really only uses Opus for writing the actual code. It uses Haiku and Sonnet for almost all of the tasks outside of writing code.
So it's hard for them to control and understand the costs of subscriptions if people are using them in different harnesses that do things they have no control over.
Yeah, but that's just the model the main agent uses. The subagents aren't Opus. They are Haiku and Sonnet. Most of the token heavy work is offloaded to subagents because of this.
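To make the cost argument concrete, here's a toy blended-cost calculation. The per-million-token prices and the token-split fractions are made-up placeholders, not Anthropic's real pricing or Claude Code's real routing:

```python
# Toy model of why routing subagent work to cheaper tiers matters.
# Prices per million tokens are hypothetical placeholders.
PRICE_PER_MTOK = {"opus": 15.0, "sonnet": 3.0, "haiku": 0.8}

def blended_cost(token_split: dict[str, float], total_mtok: float) -> float:
    """Cost of `total_mtok` million tokens split across model tiers.

    token_split maps model name -> fraction of tokens (fractions sum to 1).
    """
    assert abs(sum(token_split.values()) - 1.0) < 1e-9
    return total_mtok * sum(
        frac * PRICE_PER_MTOK[model] for model, frac in token_split.items()
    )

# Claude Code-style routing: most tokens on cheap subagent tiers.
routed = blended_cost({"opus": 0.2, "sonnet": 0.3, "haiku": 0.5}, total_mtok=100)
# A third-party harness sending everything to Opus:
all_opus = blended_cost({"opus": 1.0, "sonnet": 0.0, "haiku": 0.0}, total_mtok=100)
print(routed, all_opus)  # 430.0 vs 1500.0 under these made-up prices
```

Even with invented numbers, the shape of the result holds: an everything-on-Opus harness costs a multiple of a tiered one for the same token volume, which would explain why a flat-rate subscription is much harder to price for third-party harnesses.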
Since Feb when we got Gemini 3.1, Opus 4.6, and GPT-5.3-Codex we have seen GPT-5.4 and GPT-5.5 but only Opus 4.7 and no new Gemini model.
Both of these are pretty decent improvements.