
I came away with a very different conclusion, which is that the fact that such “bad” software can be so resoundingly successful for a business, yet be so odious to experienced human reviewers, means that it was the right engineering choice to go fast, rather than “do things right” by emphasizing code quality.

What good would it truly be if a 3K line function is split into 8 modules? It’ll be neater and more comprehensible to a human reader. More debuggable, definitely.

But given the business problem they have ("winner takes all of a massive market, first mover wins"), the right move is to throw the usual rulebook about quality software out the window and double down on the company's bet: that AI will make human code engineering less and less necessary, very quickly.

It turned out incredibly well despite the “bad” engineering — which in this case, I really count as good engineering.




It was "good engineering" only because this was a new kind of product and the customers were not aware yet of what they should get for the money they pay.

The bad quality of the Claude Code program has resulted in increased costs for the customers (very high memory consumption, slow execution, higher and sometimes much higher token count than necessary), and even for Anthropic, but nobody was aware of this, because there was no previous experience to compare with.

This kind of sloppy vibe coding works only when there is no competition. When the competition comes with something much more efficient, e.g. pi-dev, the inefficient application will be eliminated.

Anthropic attempts to protect their badly written program by forbidding its customers to use other coding harnesses, but this will not be able to protect them from competition for long.

If you are the first on a new market without competitors, then indeed time-to-market matters more than anything else and the sloppiest vibe-coded application is the best if it can be delivered immediately.

However, one must plan to replace that with a better and more efficient application ASAP, because the advantage of being the first is only temporary.


I guess that now that people are more aware of how bad their software is, we cannot blame the "super intelligent AI" for not being ready yet.

The amount of regex matching people found is staggering.


Your kind of critique overlooks the tradeoff between getting something out quickly and doing it slowly and nicely.

If you choose slowly, you are depriving your users of the value from your app for a long time. It's not as clear a choice as you think.


Have you read my entire comment?

I have already said that sometimes time-to-market matters most, so it should be the priority. But the advantage of delivering the application immediately is only temporary, so you must quickly improve your first, possibly vibe-coded, implementation; otherwise others will deliver better alternatives.

Claude Code is an obvious example of this: it has practically opened a new market, but because it has remained a mess, there are now better alternatives.

What is wrong is not generating instantly a proof-of-concept application that barely works and using it in the beginning, but continuing to build upon that even after you had enough time to rewrite it.


They would have to trade off building new features for refactoring. It seems they consider shipping more important, and that as long as the existing features mostly work, that’s good enough. As customers, we have to ask: do we agree? Do we want features over stability? I think the answer is yes, at least for me (and the market seems to agree). But it’s certainly a risk Anthropic is taking.

I will note that this strategy only really makes sense because Anthropic controls the compute. If open-source harnesses could also use Claude max plans, then they’d have to focus much more on stability and quality, or just build an open-source harness themselves, or probably better yet, get out of the harness-building business altogether. So they’re gambling on staying ahead of open-source models, which seems like it’s been a good bet so far, but we’ll see.


I understand your thoughts, but I don't think they will ever make that existing code good. Not sure if they want to and not sure if they can.

The advantage is definitely not temporary, as can be seen by how many people use Codex vs Claude Code. Claude Code works just fine, and I have zero issues at the usability level. Could they have more features? Yes, but I see that they are shipping quickly. It is obvious to me that they struck the right balance between time to market and clean code.

These discussions are insane. No agentic coding product, including Claude Code, has existed for a full year yet, but people are stating with extreme confidence who has "won" the competition to be the leading or even only provider of this kind of product and having heated arguments over whether or not we can consider the current state of the market to be temporary or not. Imagine having this same argument about Lycos versus Yahoo! in 1995.

But it is working primarily because of the Max subscription model. If I could use my Max subscription to get $5000 worth of tokens for only $200 via OpenCode or Pi, I would drop Claude Code today. I think a lot of people (and enterprises) are of a similar opinion. Not saying Claude Code would have no users, but its dominance would be greatly diminished.

It's the same for a lot of bundled things: Mac hardware + OS, Windows pre-installed, Chrome being pushed by google.com, etc.

But you can't, and the reason you care also has to do with the same production process.

It’s not like a separate company made the terminal app versus the model. If we think that the desktop app is bad, but the model is good then that’s still an endorsement of the software process.

If we think the model doesn't matter at all, then that's an even bigger endorsement. If the model has nothing worth talking about over the nearest competitors or an open-source alternative, then the remainder is marketing and polish.

I just don’t understand how people can look at a company that is capacity constrained in this market and think that they’re doing things poorly.


You can go just as fast if you make good code; you just have to burn more tokens to do it. The tokens you burn on strict structure and documentation you'll save in debugging as the codebase grows. I'm at 5-30x my normal production depending on the day, with zero team and writing better code than I ever have. But you need a robust system to manage the path, plus active supervision and management: basically, you'll apply your senior dev skills as if you were managing 50 frisky interns.

People notice the jank, and it's affecting CC's reputation. That's not easy to come back from.

Unsure if this was AI generated, but doesn't pass close scrutiny:

"winner takes all of a massive market, first mover wins"

...this is the kind of AI spam that sounds convincing until you think about it.

It's not at all clear the foundation model or coding agent markets are winner takes all. Far more likely to be a handful of successful players based on the market so far.

First mover wins? OpenAI was first to market and looks in trouble.

There's something convincing about this kind of cliche that lets it slip past you until you start inspecting each claim.


The argument is that the bad code will result in the companies core product failing.

It just won't survive, it's that simple.

You get to know before others how the future will play out.


Successful over what time frame? It's way too early to declare victory.

Is there some testable/observable prediction where Opencode beats CC and Nanoclaw beats Openclaw?

Wow this industry is so cooked. Good God almighty.


