Yes, and to add, in case it's not obvious: in my experience the maintenance cost and the mental (and emotional, call me sensitive) cost of bad code compound exponentially the more hacks you throw at it
Now with AI, you're not only dealing with maintenance and mental overhead, but also the cost of the Anthropic subscription (or whichever AI company's) to deal with this spaghetti. Some may decide that's an okay tradeoff, but personally it seems insane to delegate the majority of development work to a black-box, cloud-hosted LLM that can be rug-pulled out from under you at any moment (and that you're unable to hold accountable if it screws up)
Call me naive, but I don't believe that I'm going to wake up tomorrow and ChatGPT.com and Claude.ai are going to be hard down and never come back. Same as Gmail, which is an entirely different corporation. I mean, they could, but it doesn't seem insane to use Gmail for my email, and that's way more important to my life functioning than this new AI thing.
I'm pretty sure that will be true with AI as well.
No accounting for taste, but part of what makes code hard for me to reason about is when it has lots of combinatorial complexity, where the number of states that can occur makes it difficult to know all the possible good and bad states your program can be in. Combinatorial complexity is something that can objectively be expensive for any form of computer, be it a human brain or silicon. If the code is written in such a way that the number of correct and incorrect states is impossible to know, then the problem becomes undecidable.
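A minimal sketch of how that state explosion compounds: each independent boolean flag doubles the number of states a piece of code can be in. The flag names below are illustrative, not from any real codebase.

```python
# Each independent boolean flag doubles the state space: n flags
# means 2^n combinations a reviewer (human or model) must consider.
from itertools import product

def possible_states(flags: list[str]) -> int:
    """Number of distinct flag combinations the code can be in."""
    return 2 ** len(flags)

flags = ["cached", "dirty", "locked", "retrying", "migrating"]
print(possible_states(flags))  # 32 combinations for just 5 flags

# Enumerating them makes the review burden concrete.
all_states = list(product([False, True], repeat=len(flags)))
assert len(all_states) == 32
```

Five flags is already 32 states to reason about; ten flags is over a thousand, which is roughly where "knowing all the good and bad states" stops being feasible by inspection.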
I do think there is code that is "objectively" difficult to work with.
There are a number of things that make code hard to reason about for humans, and combinatorial complexity is just one of them. Another one is, say, size of working memory, or having to navigate across a large number of files to understand a piece of logic. These two examples are not necessarily expensive for computers.
I don't entirely disagree that there is code that's objectively difficult to work with, but I suspect that the Venn diagram of "code that's hard for humans" and "code that's hard for computers" has much less overlap than you're suggesting.
Certainly with current models I have found that the Venn diagram of "code that's hard for humans" and "code that's hard for computers" has actually been remarkably similar, I suspect because it's trained on a lot of terrible code on Github.
I'm sure that these models will get better, and I agree that the overlap will be lower at that point, but I still think what I said will be true.
I wouldn't expect so. These machines have been trained on natural language, after all. They see the world through an anthropomorphic lens. IME & from what I've heard, they struggle with inexpressive code in much the same way humans do.
What do you think about the argument that we are entering a world where code is so cheap to write, you can throw the old one away and build a new one after you've validated the business model, found a niche, whatever?
I mean, it seems like that has always been true to an extent, but now it may be even more true? Once you know you're sitting on a lode of gold, it's a lot easier to know how much to invest in the mine.
It hasn't always been true; it started with rapid development tools in the late '90s, I believe.
And some people thought they were building "disposable" code, only to see their hacks being used for decades. I'm thinking about VB but also behemoth Excel files.
I guess the question is, are the issues not worth fixing because implementing a fix is extremely expensive, or because the improvements from fixing it were anticipated to be minor? I assume the answer is generally a mix of the two.
Someone still has to figure out how to make the experiences of the two generations consistent in the ways they need to be and different only in the ways they're allowed to differ.
The tl;dr of this is that I don't think the code itself is what needs to be preserved; the prompt and chat are the actual important and useful things here. At some point I think it makes more sense to fine-tune the prompts to get increasingly more specific and just regenerate the code based on that spec, and store that in Git.
> At some point I think it makes more sense to fine tune the prompts to get increasingly more specific and just regenerate the code based on that spec, and store that in Git.
Generating code using a non-deterministic code generator is a bold strategy. Just gotta hope that your next pull of the code slot machine doesn’t introduce a bug or ten.
We're already merging code from the slot machine that contains bugs. People aren't actually reading through 10,000-line pull requests most of the time, and people aren't really reviewing every line of code.
Given that, we should instead tune the prompts well enough to not leave things to chance. Write automated tests to make sure that inputs and outputs are ok, write your specs so specifically that there's no room for ambiguity. Test these things multiple times locally to make sure you're getting consistent results.
> Write automated tests to make sure that inputs and outputs are ok
Write them by hand or generate them and check them in? You can’t escape the non-determinism inherent in LLMs. Eventually something has to be locked in place, be it the application code or the test code. So you can’t just have the LLM generate tests from a spec dynamically either.
> write your specs so specifically that there's no room for ambiguity
Using English prose, well known for its lack of ambiguity. Even extremely detailed RFCs have historically left lots of room for debate about meaning and intention. That’s the problem with not using actual code to “encode” how the system functions.
I get where you’re coming from but I think it’s a flawed idea. Less flawed than checking in vibe-coded feature changes, but still flawed.
> Write them by hand or generate them and check them in?
Yes, written by hand. I think that ultimately you should know what valid inputs and outputs are and as such the tests should be written by a human in accordance with the spec.
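A minimal sketch of that idea: hand-written tests pin the spec, while the implementation behind them can be regenerated freely as long as the assertions keep passing. `parse_price` is a hypothetical example function, not anything from this thread.

```python
# Illustrative implementation: this part could be regenerated by an LLM.
def parse_price(text: str) -> int:
    """Parse a price string like '$12.34' into integer cents."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

# The human-authored contract: exact inputs and outputs, no ambiguity.
# These assertions are the part that stays locked in place.
def test_parse_price():
    assert parse_price("$12.34") == 1234
    assert parse_price("0.99") == 99
    assert parse_price("$5") == 500

test_parse_price()
```

The design point is that the tests, not the generated code, become the durable artifact: a regeneration that breaks them is rejected mechanically rather than by review.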
> Less flawed than checking in vibe-coded feature changes, but still flawed.
This is what I'm trying to get at. I agree it's not perfect, but I'm arguing it's less evil than what is currently happening.
Observability into how a foundation-model-generated product arrived at that state is significantly more important than the underlying codebase, as it's the prompt context that is the architecture.
Yeah, I'm just a little tired of seeing these multi-thousand-line pull requests where no one has actually looked at the code.
The solution people are coming up with now is using AI for code reviews and I have to ask "why involve Git at all then?". If AI is writing the code, testing the code, reviewing the code, and merging the code, then it seems to me that we can just remove these steps and simply PR the prompts themselves.
You don't actually need source control to be able to roll back to any particular version that was in use. A series of tarballs will let you do that.
The entire purpose of source control is to let you reason about change sets to help you make decisions about the direction that development (including bug fixes) will take.
If people are still using git but not really using it, are they doing so simply to take advantage of free resources such as github and test runners, or are they still using it because they don't want to admit to themselves that they've completely lost control?
> are they still using it because they don't want to admit to themselves that they've completely lost control?
I think this is the case, or at least close.
I think a lot of people are still convincing themselves that they are the ones "writing" it because they're the ones putting their names on the pull request.
It reminds me of a lot of early Java, where it would make you feel like you were being very productive because everything that would take you eight lines in any other language would take thirty lines across three files to do in Java. Even though you didn't really "do" anything (and indeed Netbeans or IntelliJ or Eclipse was likely generating a lot of that bootstrapping code anyway), people would act like they were doing a lot of work because of a high number of lines of code.
Java is considerably less terrible now, to a point where I actually sort of begrudgingly like writing it, but early Java (IMO before Java 21 and especially before 11) was very bad about unnecessary verbosity.
> If people are still using git but not really using it, are they doing so simply to take advantage of free resources such as github and test runners,
Does it have to be free to be useful? The CD part is even more important than before, and if they still use git as their input, and everyone including the LLM is already familiar with git, what's the need to get rid of it?
there's value in git as a tool everyone knows the basics of, and as a common interface of communicating code to different systems.
passing tarballs around requires defining a bunch of new interfaces for those tarballs, which adds a cost to every integration that you'd otherwise get essentially for free if you used git
A series of tarballs is really unwieldy for that though. Even if you don't want to use git, and even if the LLM is doing everything, having discrete pieces like "added GitHub oauth to login" and "added profile picture to account page" as different commits is still valuable for when you have to ask the LLM "hey about the profile picture on the account page".
Also, the approach you described is what a number of AI for Code Review products are using under-the-hood, but human-in-the-loop is still recognized as critical.
It's the same way how written design docs and comments are significantly more valuable than uncommented and undocumented source.
Because LLMs are designed as emulators of actual human reasoning, it wouldn't surprise me if we discover that the things that make software easy for humans to reason about also make it easier for LLMs to reason about.
It’s also possible to sell chairs that are uncomfortable and food that tastes terrible. Yet somehow we still have carpenters and chefs; Herman Miller and The French Laundry.
Some business models will require “good” code, and some won’t. That’s how it is right now as well. But pretending that all business models will no longer require “good” code is like pretending that Michelin should’ve retired its list after the microwave was invented.
Those high-end restaurants are more like art and an exploration of food than something practical like code. The only similarity is maybe research in academia. There are no real industry uses of code that are like art.
I used the extreme of the spectrum, I can’t imagine you’re arguing that food is binary good / bad? There’s a litany of food options and quality, matching different business models of convenience and experience.
Research in academia seems less appropriate because that’s famously not really a business model, except maybe in the extractive sense
There's no equivalent of experience and art in code. Writing code is not expressing yourself, and you don't pay for pushing its limits and experimenting with it. That's what high-end restaurants are about, along with the service they provide.
As far as good or bad, how food is made is irrelevant to the outcome if it's enjoyable.
Still, talk of "good" code exists for a reason. When the code is really bad, you end up paying the price by having to spend more and more time to develop new features, with a greater risk of introducing bugs. I've seen that in companies in the past, where bad code meant less stability and more time to ship the features we needed to retain customers or win new ones.
Now whether this is still true with AI, or if vibe coding means bad code no longer has this long-term stability and velocity cost because AIs are better than humans at working with bad code... we don't know yet.
Not only true but I would guess it's the normal case. Most software is a huge pile of tech debt held together by zip-ties. Even greenfield projects quickly trend this way, as "just make it work" pressure overrides any posturing about a clean codebase.
A cornerstone of this community is "if you're not embarrassed by the first release you've waited too long", which is a recognition that perfect code is not needed to create a successful business. That's why Show HN exists.
It depends on the urgency. Not every product is urgent. CC arguably was very urgent; even a day of delay meant the competitors could come out with something slightly more appealing.
Most of their products are so large that you can easily find parts with very bad and parts with excellent code. I am not sure a whole ERP product could work with all very bad code, though.
I've seen people compare the current AI situation to the early days of Uber. Basically: "Your excitement is artificially inflated by the fact that a VC just paid half your bill."
That definitely happened with Uber, but I would argue that one key difference between the Uber situation and the AI situation is COST: how much COGS can be reduced via optimization and technology.
In the Uber scenario, the cost is labor, and there's a hard lower limit where people will find something else to do for work.
In the AI scenario, we've already seen the labs make major reductions in cost-per-token. I think it's fairly uncontroversial to say they have more possible cost-reduction levers than Uber did.
So I don't agree that at some point VC money will run dry and the unit economics for tokens will dramatically change.
The part where this will be fun is when the VCs lobby the US government, via peace board donations or golden iPhones, to ban Chinese open-source AI because otherwise they won't ever make their money back.
I'd think so. Stored procedures let you do multi-statement sequences in fewer round trips. In 2026 larger systems are as likely as ever to run PostgreSQL on a different machine (or machines) than the application server. While latency between the two generally goes down over time, it's still not nothing. You may care about the latency of individual operations or the throughput impact of latency while holding a lock (see Amdahl's law).
Of course, the reasons not to use stored procedures still apply. They're logic, but they're versioned with the database schema, not with your application, which can be a pain.
* Good database drivers will let you pipeline multiple queries concurrently (esp. in languages with async support), effectively eliminating the _N_x roundtrip cost (you can even execute them in parallel if you use multiple connections, not that I recommend doing that). But obviously this is only doable where the queries are independent of one another; I use this mainly to perform query splitting efficiently if the join key is already known.
* These days the database is often effectively versioned alongside the code anyway, at least for smaller projects that "own" the database, eliminating the biggest issue with stored procedures.
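A minimal sketch of why pipelining independent queries wins, using simulated latency rather than a real driver. The fake `query` coroutine stands in for a driver call (e.g. something like asyncpg's); the 50 ms delay and the SQL strings are illustrative assumptions.

```python
# Three independent "queries", each paying a simulated 50 ms network
# round trip. Sequential execution pays the latency three times;
# issuing them concurrently pays it roughly once.
import asyncio
import time

LATENCY = 0.05  # simulated round-trip delay in seconds (an assumption)

async def query(sql: str) -> str:
    await asyncio.sleep(LATENCY)  # stand-in for the network round trip
    return f"result of {sql!r}"

async def sequential() -> float:
    start = time.monotonic()
    for sql in ("SELECT 1", "SELECT 2", "SELECT 3"):
        await query(sql)
    return time.monotonic() - start  # ~3 x LATENCY

async def pipelined() -> float:
    start = time.monotonic()
    await asyncio.gather(
        query("SELECT 1"), query("SELECT 2"), query("SELECT 3")
    )
    return time.monotonic() - start  # ~1 x LATENCY

seq = asyncio.run(sequential())
pipe = asyncio.run(pipelined())
assert pipe < seq
```

As noted above, this only works when the queries are independent of one another; a query whose parameters come from a previous result still has to wait for that round trip.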
Brings back amazing memories from when I was in high school. A friend invited me to come to the museum for some drawing thing, turned out I got to participate in making a Sol Lewitt drawing. Had no idea who he was at the time but it was such a unique experience.
We were in an empty gallery room with an (I believe) black-painted wall with a faint grid on it, and we were given an old hat with a bunch of cards in it. There were 4 or 5 high-school kids, and we each took turns drawing a card out of the hat and drawing the shape from the card on consecutive grid squares on the wall. The shapes were basic lines and semicircles. It was lots of fun and the end effect was a very beautiful line-drawing mural.
This got me thinking: Rewind 25 years, I can easily imagine 15 year-old me sinking DOZENS of hours into playing this "game". I remember I put much more time than that into a free game that came in a box of cereal[0].
Today, I loaded the site up and spent about 30 seconds on it before deciding "this is cool!" and moving on, probably never to return.
What changed? I guess it's a mix of: (A) How I value my time. (B) The bar for "what pulls me in" in terms of gaming. (C) Some other factor around me just having already burned enough hours on games.
I'm not really sure how much each factor contributes.
Opportunity cost and perspective. We've probably played enough games to know how the cycle goes; there's a little voice in our heads now telling us that it's all just a big pixel hunt and the next few hours will be more of the same (my interest in a game fades once I learn the meta). And then there's so many games these days... so the other question is why not play something more interesting or exciting?
I think that's it: when it's new you explore, but when you know what to expect or have seen it before, exploration is no longer interesting.
That said, there's some games out there today that draw me in just as much as others did 25 years ago; I've spent hundreds of hours in Factorio, I can't imagine how much I'd be into it 25 years ago (...assuming I would have understood it back then). Likewise, I'm sure I'd be a lot more into Minecraft if I was 25 years younger.
I used to believe this about myself as well, but later realized it was a rationalization. The reality is it's because leaving hacker news for extended periods (more than a minute or two) results in dopamine withdrawal. I feel a powerful urge to return to browsing links and my brain makes up a reason along the lines of "you're wasting time by staying on this site instead of going back to hacker news." It's a similar thing that drives me to "skip ahead to the good part" in youtube videos rather than watching the whole thing, evidenced by my doing it even on videos that are very short.
Weirdly, playing games is typically the thing I feel the least guilty doing, precisely because it's a distraction from the other stuff I'd otherwise not be doing anyway. There's just a lot of stuff I want to do, that I struggle to do, and so I feel guilty about not making progress on that stuff. Then, whenever I try to do something else, I feel too guilty to do that something else.
It's a real self-reinforcing loop. I agree that it's not healthy. It's just hard to break out of.
I deal with the exact same mental model. I think for me, while actively gaming I do have fun. It’s only after the fact I look back on the time wasted gaming and think “wow, I really should have worked on that project I want to build instead of playing a game”. It’s also hard to rationalize time spent gaming when you have nothing to show for it afterwards.
If you ever figure out the solution to this negative thought-loop, let me know please!
I dunno. I see many “grown ups” replacing video game time with just more time scrolling on their phones, or maybe on the TV watching YouTube or some streaming service.
I think playing (some) video games can be a bit better for your brain vs. the above alternatives. At least many of them require thought and/or coordination.
Again, there are exceptions, where they’re not much better than doom scrolling. But it’s not hard to find some that require some effort and thought.
Same but with 1 kid and different websites (including HN, which is equally bad!). Actively fighting it though. Slowly removing all social media accounts, now just need to figure out how to block stuff permanently on my phone.
On a desktop I did it with changing my hosts file to point everything to 127.0.0.1. Need to figure out how to do this also on mobile without an additional network device that would disrupt things for my wife.
I think it's in large part just to do with us having developed our frontal cortex and impulse control. I would probably have gotten dopamine-addicted to it 15 years ago, and wouldn't have had the nagging back-of-mind thought about having to convert my time into money to survive at that age.
I miss the days when I'd click every link and follow every rabbit hole. 100% completionism of collection games. It's shaped how my life has turned out, for better and worse.
FT uses "underwater" because the deal was $300 billion and the stock has lost $315 billion in market cap since the deal. That's a bit of a stretch, but the rest of the article is very good.
And as it won't be obvious to everyone here: Alphaville is one of the few free parts of FT online. You need to create an account to access it, but don't need a paid subscription.
Yea, these market cap discussions are always a bit meaningless actually; stocks can be volatile for many reasons… it's not like they actually lost the delta
which has always been true