I don't know how to tell you this, but people have been writing custom software for personal use for decades. I've been doing it since at least 2009! I find it hard to believe that there is a demographic of people that were yearning to write code, but simply could not because they lacked LLMs. Is it the price? Are people simply too cheap to buy books? Or have they simply "forgotten" how to patiently and thoughtfully read them? Or has the quality of tutorials/documentation for languages/libraries/frameworks online decayed in the last decade? Or is it really that people have struggled to type characters of code into their text editors[1]?
Basically, I am prepared to accept that there is a friction that LLMs lubricate away, but what is the source of the friction, and why am I (and a bunch of other colleagues) not feeling that friction daily in our practice?
[1]: And if so, where did we programmers and computer scientists go wrong? Were subroutines and macros not sufficient for automating all of that excess typing? Were Emacs and Vim simply not saving enough keystrokes? Did people forget how to touch-type?
> Basically, I am prepared to accept that there is a friction that LLMs lubricate away, but what is the source of the friction, and why am I (and a bunch of other colleagues) not feeling that friction daily in our practice?
You must be extremely talented and fast if LLMs make no difference for you.
For people like me though, it's another story: I've been doing this professionally for 25 years and of course, like many, I have been writing custom software for my own use all this time, on personal time. But with LLMs I get better results, faster and with very little effort. And that is the difference between another item in my list of unfinished software that consumed too much of my weekends and a cool utility/toy/useful thing I got after a few fun and interesting chat sessions.
> I find it hard to believe that there is a demographic of people that were yearning to write code, but simply could not because they lacked LLMs.
I still vaguely remember how difficult man pages were to understand when I first started reading them. I'm pretty sure the biggest obstacle is the fact that most documentation is written for people who already know the standard computer science terminology. I have a generally negative opinion of LLMs, but one thing they do very well is function as a "reverse dictionary". You can input an idiosyncratic description of something you want and get the standard terminology. This is a new and valuable capability.
There is a universe out there where most of the world is reading Solaris man pages instead of Linux man pages. Whatever your thoughts on the Solaris OS, I think it is fair to say that no operating system has ever matched the quality of its man pages.
Interestingly, I also converged on the "reverse dictionary" usage of LLMs around 2024[1], mostly to indulge in (human) language-learning.
An excerpt from the post below:
> It is a phenomenal reverse dictionary (i.e. which English words mean "of a specific but unspecified character, quality, or degree"). It not only works for English, but also for Esperanto (i.e. which Esperanto words mean "of a specific but unspecified character, quality, or degree"), as well as my own obscure native language. This is a huge time-saver when learning languages (normal dictionaries won't cut it, and bilingual dictionaries are limited, if they are available at all). Even if you are just using a language you are fluent in, a reverse-dictionary-prompt can help you find words and usages, and can also help you find "dark spots" in the language's lexicon.
I've commented on this subject before, but the fact of the matter is that kids getting into high tech and programming mostly don't read books anymore. How do I know? Recently I was hanging out with a bunch of high school students who asked me how I learned. I said it was mostly via books and man pages. "Yeah, don't sleep on high quality written material. O'Reilly. Wiley. Addison-Wesley. Manning. MIT. No Starch Press. &c..."
Well. You should have seen the look on their faces. I might as well have morphed into the Steve Buscemi meme "How do you do, fellow kids?" They looked at me like I was a total relic or greybeard and said things like "Nah, nobody reads tech books anymore; I learned TypeScript from YouTube videos."
Already in 2008, as a millennial teen without internet at home, I was learning C# and XNA without a single book, just tutorials and official docs I downloaded at the library alongside Visual Studio Express. I couldn't have afforded books on it anyway, but I can't imagine teens in 2026 using anything other than YouTube and some tutorials to learn this stuff.
I learned programming from tutorials :) Only after I kept encountering terms I didn't understand well (long after I was building badly organized programs) did I decide to read my first book, K&R's C. This was when animated gifs were a novelty not worth the data transfer time.
I think every generation feels like their way of learning was the best, but we all make it work. There was a time when the architects of systems directly tutored programmers on how to write programs.
> most documentation is written for people who already know the standard computer science terminology
Not really. It's probably complexity for the sake of it in some cases. It's also frequently ambiguous, and I'm really not sure why: it looks like some developers lack basic logic (?!).
> I find it hard to believe that there is a demographic of people that were yearning to write code, but simply could not because they lacked LLMs. Is it the price?
Yes, because the price is measured in time.
With LLM tooling I’ve churned out idiosyncratic tools that fit my use cases quickly. Takes maybe a day instead of a week. A week instead of months. The fast turnaround changes the economics of writing custom tools for myself.
Not speaking for the OP. But my biggest constraint is time. Now with agentic coding, I can work in 5 to 15 minute bursts a few times a day and make meaningful progress on projects, whereas before I would never have been able to context-shift away from my day job long enough for a personal project.
Yep! Time was the biggest factor. I could have created that one tool I had for years been wanting to make, but tech moves fast, and I have a job and a family and a passion for music and yadda yadda yadda. AI has been a game changer for actually accomplishing big dreams I just didn't have the time to bring to fruition.
Well, I've been writing code for decades, so I know, because there was a time (when I was younger) when I did just this.
I also know that these days, for all kinds of reasons, I do not have the time to write the tools I'm writing now without AI. I don't lack the ability, and I could do it; they would simply be multi-month side projects that I can't/won't complete.
I work 8-10 hours a day and outside those working hours I want to spend time with my family, my friends, and my hobbies.
At the same time, during those 8-10 working hours I don't want to spend time fiddling around with different programming languages or software patterns just to spit out a quirky little tool that would make my job a bit easier.
For example, I wanted a local to-do list software that I could easily integrate with my workflow. I spent some time trying to find one, but not a single one worked the way I wanted. So, one morning, I spent 5 minutes detailing what I wanted, fed the prompt to Claude, and let it rip while I was working. 30 minutes later, it was ready.
Given how often younger people find my typing speed startling, I think it has been somewhat forgotten (US high schools had "keyboarding" classes at one point but that seems to have fallen off...)
Seriously agree. I am wildly overeducated, and I often think the most useful class I ever took in high school was my senior-year typing elective. On old IBM typewriters. It was also the only class I took in high school with non-honors kids. Typing insanely fast, especially for someone who is a fast thinker, is a bit of a magic power in itself.
Speaking for myself, it's less of a yearning to write more code, than it is a yearning for tools that work a specific way.
I write plenty of code at my job, and generally don't have the desire to write more code as a hobby, except in rare cases when the mood really strikes.
I have been writing my own custom software for myself for over 30 years. But in the last six months I have written a lot more of it because the language models make it so much faster and easier to do so.
If you are saying that what we had previously was actually as easy as literally writing "make me a web app for arranging seats at a wedding and put it on Vercel" then you are very divorced from reality.
I know how to do all of these things and even find them easy, but it's just much faster now. These are personal one task toy apps, but they are useful.
Yes, definitely, though I'm unsure what "cheap" means here.
Not everyone has an SV income and infinite time to read all the books such an income would allow them to buy, let alone integrate the lessons at a practical implementation level. Plus people might have other interests in life, and family and friends they want to dedicate time and warm attention to.
There's a whole lot of people who want software to do certain things but whose job isn't programming and whose life doesn't allow the time for all the book reading, tutorial running, and practice it takes to write useful code.
I'm a long-time ops guy. I script, but I spend most of my time configuring, patch testing, and keeping the low-level infra running, much of which doesn't require "coding" per se. Infra as code is, in the grand scheme, relatively new and still not ubiquitous, despite what Silicon Valley would have you believe. I never had a need to learn to code to a level that would do many of the things I'd like to see happen and find useful. Now I can make those software desires a reality without having to alter my career, preferred hobbies, or much of anything else about my life.
> I don't know how to tell you this, but people have been writing custom software for personal use for decades. I've been doing it since at least 2009!
GP never claimed otherwise.
As for the rest of your comment, it's frankly a bit patronising: are people too cheap, are people too lazy to read, are people unable to type...?
No, people are busy, a fact which GP made abundantly clear in the very first paragraph.
> I would never have done this if it weren’t for AI - I simply don’t have the time otherwise.
But if people are so busy, when are they planning to use their suite of bespoke software anyway? Isn't this all about recreation? This blog post certainly seems to be, at least. Is it really all about spending money on AI to write something that you then use just for your job? Because, apparently, you have no time otherwise?
If it's not for fun, what's it for? It doesn't really seem like anyone is making stuff they are going to use next month anyway. But I totally get how it's recreational, and can be fun in the "computer, make my program" kind of way.
That's one question that never gets answered. It's way easier to write a vim/sublime/emacs plugin than a whole brand-new editor. These days, I try to use single-purpose programs that do one thing and compose them, instead of trying to get the "one true" software.
I don't want to be too tsk-tsk here, but please remember the community standards. It's not appropriate to assume bad faith, and we should strive to be charitable in the comments section [1]. Saying vim here is clearly a reference to the article, which has a whole section about it. To borrow some of that AI lingo, we are already sharing all the context here, so why speak past me like this?
Further, the article does not mention "requirements"; it mentions the "joy" of having software "fit" just you. It takes, I think, a certain amount of care in the writing to say they are enabled by their system only insofar as there is a "satisfaction" in not dealing with something from without that is made for a more general audience.
At the end of the day, life is what you spend time doing. I don't think the author, or anybody really, thinks cumulative time is saved one way or the other here. This is all a product of what we want to spend time doing. And I am just saying, that's recreational! Something isn't lesser just because it's not about maximizing productivity or making more money. Either you have a "decades-long" project configuring a system, or you're spending a decade writing new software for yourself that's a "quiet pleasure to use." Either way, it's clearly about the project of it. Do we really think anyone is going to vibe-code a vim clone and, insofar as they use it, not continue to tinker with it? Isn't that like the whole upshot here? That you can make things forever?
A guy who uses i3/sway and rolls his own DE even before the vibe-coding era is already a particular kind of person with certain priorities and judgements about time! And that's cool! I am that kind of guy, fwiw.
A lot of people into synthesizers and related gear talk about so-called "gear acquisition syndrome," where, in the search for hardware that fits their "requirements" as serious musicians, the time (and money) they spend just acquiring new things ends up eclipsing the time spent doing the actual thing (making music). Depending on how much money they have, this isn't necessarily bad; one just realizes one is maybe a synth collector more than a composer.
Even if I had all the AI money token blah blah in the world, I would still hesitate to spend my weekends rebuilding an IDE or editor, because for me personally, that's time getting in the way of using the computer to make my things. When I am hungry, I do not want to forge my own chef's knife first, but I do think the people who do have a kinda cool hobby! Or, if it's about spending my weekend making an OS so that I can, come Monday, read work emails exactly how I want, well, that just sounds terrible to me, but everyone has their own work-life balance I think.
Again, I am not trying to explain things away or, I guess, be negative here. There are lots of kinds of people, and that's ok! I think it's just interesting how we now traffic in concepts of "time" and "productivity" and "serious computer work vs. recreational computer stuff" these days!
I have written multiple IRC bots in the last 20+ years. It's my go-to project to test a new language, mostly because I know the protocol inside and out and it has some gotchas that not every language handles comfortably (managing a bunch of open TCP sockets with threads/subprocesses, mostly).
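To make the "gotchas" concrete, here's roughly the shape of such a bot -- a minimal sketch in Python, with the server, nick, and channel all made up:

```
# Minimal IRC bot skeleton (illustrative; irc.example.net, "demobot" and
# "#demo" are made-up names). IRC is line-oriented text over a TCP socket,
# and the classic gotcha is that servers drop clients that ignore PING.
import socket

HOST, PORT = "irc.example.net", 6667
NICK, CHANNEL = "demobot", "#demo"

sock = socket.create_connection((HOST, PORT))

def send(line):
    sock.sendall((line + "\r\n").encode("utf-8"))  # every message ends in CRLF

send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :demo bot")
send(f"JOIN {CHANNEL}")

for raw in sock.makefile(encoding="utf-8", errors="replace"):
    line = raw.rstrip("\r\n")
    if line.startswith("PING"):              # keepalive: answer or get dropped
        send("PONG" + line[4:])
    elif "PRIVMSG" in line and line.endswith(":!hello"):
        send(f"PRIVMSG {CHANNEL} :hello yourself")
```

One blocking socket like this is trivial; the pain starts when you hold a dozen of them open at once, which is exactly where a language's threads/async/subprocess story shows its character.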
Have I tried to write my own IRC client yet? Nope. Because even though I know how to, the time spent wouldn't have been worth it. Getting from zero to feature parity would've taken me weeks or months of evenings doing nothing else.
I've got my own irccloud/thelounge clone running now, took me two weeks of calendar time and I spent maybe 6-7 evenings on it and a few hare-brained ideas with Claude on my phone.
The amount of "lubrication" LLMs have given me in going from idea to something good enough just for me is completely bonkers.
> I find it hard to believe that there is a demographic of people that were yearning to write code, but simply could not because they lacked LLMs.
I am in that demographic. I have been hacking on other peoples' software as a necessity, to get it to work or to do things I wanted that it didn't yet do, all my career. LLMs came along and afforded me the opportunity to act like a full time programmer when I'm just a paranoid systems monkey who is normally obliged to treat programming as a barrier to be overcome, not a career or even primary hobby.
In my specific case, the reason I was yearning to write code but did not was simply because there weren't enough hours in the day, and I wasn't told that I should spend my on-the-clock hours doing it (unless it was for automating my job). So despite the fact that I have had hundreds of instructional hours of programming classes, learned the basics in half a dozen languages, and been "hacking" code for years, none of it stuck because I never had an employer say to me "right, you're going to be responsible for writing (or maintaining) this Perl app here..."
> Basically, I am prepared to accept that there is a friction that LLMs lubricate away, but what is the source of the friction, and why am I (and a bunch of other colleagues) not feeling that friction daily in our practice?
Learning a programming language and then not getting to use it more than a few days every 3 years means you don't actually learn the language. It's more like a pleasant evening playing a game.
You and your colleagues are, I presume, programmers. I'd wager what I just described is Greek to you. So try to imagine it this way: somebody comes out with a crazy new prototype CPU. It's got a radically different ISA, so it doesn't even have C on it yet -- you have to poke registers with a brand new language that's like Ada on mescaline and it's built on a flavor of assembly that's like nothing you've ever heard of. So your boss tells you to learn it, then 2 weeks later pulls you off the project to do something normal and takes the dev board away.
If you don't see that CPU again for 3 years, how long are you going to retain that bit of knowledge you acquired? Well, that's what it's like for us programmer-adjacent nerds who spend all our time building systems, replacing failed components, crimping cables, writing disaster recovery documents, adjusting backup schedules, and getting woken the hell up night after night because another filesystem filled up.
I have no data to share on how many of us there are out there in similar situations world-wide, but I have met numerous traditional sysadmins in my time who were competent at automating things with shell scripts, not competent at writing "software" in "real" programming languages, and are probably using LLMs now to remedy that lack of skill.
For every 10 DevOps guys there may be one trad sysadmin out there who knows enough Perl to glue the server farm together and keep it running but can't be bothered to learn Python. Or the ratio may be the opposite. But whichever it is, that demographic very much exists.
People (well, American people; disclosure: I am an American) used to be scared/worried that Silicon Valley would eventually move to Bangalore or Shenzhen, because of wage-discrepancies, and so on -- and it is not a totally unreasonable concern, considering that the _Silicon_ part of Silicon Valley has slowly relocated to Taipei, Seoul, Tokyo, and a few other places. At this point, maybe we should start pushing for the _rest_ of Silicon Valley to get relocated somewhere else, too.
It's a breeding ground for Edisons and Morgans, not Teslas. It is profoundly depressing that SV is doing everything it can (knowingly or unknowingly, not sure which is worse) to get the entire planet to stop taking it seriously and to shun it.
If you have worked in Silicon Valley you know that Bangalore and Shenzhen came here ;)
In all seriousness, the silicon is still designed in Silicon Valley but maybe you don't hear about that as much? Broadcom, Qualcomm, Intel, Samsung, AMD, Nvidia, etc. all have a huge presence there still.
Just to emphasize my point, China is not being deprived of chip _designs_ (via export bans of ASML-made lithography equipment), but rather of the actual physical machines that rearrange the atoms.
I feel like LLMs[1] are going to cause a kind of "divorce" between those who love making software and those who love selling software. It was difficult for these two groups to communicate and coordinate before, and now it is _excruciating_. What little mutual tolerance and slack there was, is practically gone.
Open source was always[2] a fragile arrangement based on the kind of trust that involves looking at things through one's fingers (turning a blind eye may be more idiomatic in English), and we are at the point where you just have to either shut your eyes, or otherwise stop pretending that the situation can be salvaged at all.
Just a thought I had: some people think that LLM-shaming is déclassé, and maybe it is, but I think that perhaps we _should_ LLM-shame, until the AI-companies train their LLMs to actually give attribution, if nothing else (I mean, if it can memorize entire blocks of code, why can't it memorize where it saw that code? Would this not, potentially, _improve_ the attribution-situation, to levels better than even the pre-LLM era? Oh right, because plagiarism might actually be the product).
[1]: Not blaming the tech itself, but rather the people who choose to use it recklessly, and an industry that is based almost entirely on getting mega-corporations to buy startups that, against the odds, have acquired a decent number of happy-ish customers, that can now be relentlessly locked-in and up-sold to.
Just to add some perspective to this comparison: the US massacred four _million_ people in South East Asia during the Vietnam war. That is two-thirds of a Holocaust. The Iraq War (the second one) cost between half a million and a million lives (estimates vary, and that only includes violent deaths directly caused by American troops -- the war itself caused an increase in crime and murder and out-migration).
I could go on, but Tiananmen does not compare to most of the things the US has done outside of its own borders from 1946 to the present. And no, we (I am American) cannot justify a body count in the millions just because our victims are communist/authoritarian/theocratic. Note also that we number only 5% of the world's population, and that if we compared body-counts as percentages of populations, instead of as absolute numbers, I doubt we even have enough people to settle that debt.
Even worse, if the world internalizes that it is fine to murder millions of foreigners just because they are oddballs that their citizens cannot empathize with, then _we_ are going to have a big problem -- we appear much more odd to the world than the world does to us.
I am surprised that our shenanigans have been tolerated for nearly a century.
Around the time of the pandemic, a company wanted to make some JavaScript code do a kind of transformation over a large number of web-pages (a billion or so, fetched as WARC files from the web archive). Their engineers suggested setting up SmartOS VMs and deploying Manta (which would have allowed the use of the JavaScript code in a totally unmodified way -- map-reduce from the command-line, that scales with the number of storage/processing nodes), which should have taken a few weeks at most.
After a bit of googling and meeting, the higher-ups decided to use AWS Lambdas and Google Cloud Functions, because that's what everyone else was doing, and they figured that this was a sensible business move because the job-market must be full of people who know how to modify/maintain Lambda/GCF code.
Needless to say, Lambda/GCF were not built for this kind of workload, and they could not scale. In fact, the workload was so out-of-distribution that the GCP folks moved the instances (if you can call them that) to a completely different data-center, because the workload was causing performance problems for _other_ customers in the original data-center.
Once it became clear that this approach could not scale to a billion or so web-pages, it was decided -- no, not to deploy Manta or an equivalent -- to build a custom "pipeline" from scratch that would do this. This system was in development for 6 months or so, and never really worked correctly/reliably.
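For what it's worth, the per-object "map" step such a pipeline needs is not much code. A sketch of the idea in Python (the original was JavaScript; `transform` is a stand-in for the real logic, and the warcio library is just one common way to read WARC records):

```
# Per-file "map" step: read one WARC file, transform each HTTP response,
# emit one line per page. In a Manta-style setup this runs, unmodified,
# on whatever storage node already holds the object; scheduling it across
# a billion pages is the platform's problem, not the script's.
import sys
from warcio.archiveiterator import ArchiveIterator  # pip install warcio

def transform(url, body):
    return f"{url}\t{len(body)}"   # placeholder for the actual transformation

def map_one_warc(path):
    with open(path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            body = record.content_stream().read()
            print(transform(url, body))

if __name__ == "__main__":
    map_one_warc(sys.argv[1])
```

Anything that can run something of this shape next to the data and merge the output streams would have done the job; that was the entire appeal of the Manta suggestion.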
This is the kind of thing that happens when non-engineers can override or veto engineering decisions -- and the only reason they can do that, is because the non-engineers sign the paychecks (it does not matter how big the paycheck is, because market will find a way to extract all of it).
One of the fallacies of the tech-industry (I do not mean to paint with too broad a brush, there are obviously companies out there that know what they are doing) is that there are trade-offs to be made between business-decisions and engineering-decisions. I think this is more a kind of psychological distortion or a false-choice (forcing an engineering decision on the basis of what the job market will be like some day in the future -- during a pandemic no less -- is practically delusional). Also, if such trade-offs are true trade-offs, then maybe the company is not really an engineering company (which is fine, but that is kind of like a shoe-store having a few podiatrists on staff -- it is wasteful, but they can now walk around in white lab-coats, and pretend to be a healthcare institution instead of a shoe-store).
Personally, I believe that the tech industry sustains itself via technical debt, much like the real economy sustains itself on real debt. In some sense, everyone is trying to gaslight everyone else into incurring as much technical debt as possible, so that a way to service the debt can be sold. Most of the technical debt is not necessary, and if people were empowered to just not incur it, I suspect it would orient tech companies towards making things that actually push the state of the art forward.
There was a moment ca. 2020 when everyone was losing their minds over Lambda and other cloud services like SQS and S3 because they're "so cheap!!11". Innumeracy is a hell of a drug.
A lot of criticism of k8s is always centered on some imagined perfect PaaS, or related to being in a very narrow goldilocks zone where the costs of "serverless" are easier to bear...
> Personally, I believe that the tech industry sustains itself via technical debt, much like the real economy sustains itself on real debt. In some sense, everyone is trying to gaslight everyone else into incurring as much technical debt as possible, so that a way to service the debt can be sold.
This feels like a reminder that everything "Cloud" is still basically the same as IBM's ancient business model. We've always just been renting time on someone else's computers, and those someone else people are always trying to rent more time. The landlords shift, but the game stays the same.
It seems that many high-quality things (or otherwise aspirational things) take on Esperanto names (disclosure, I am an Esperantist). While Monero is no doubt a cool crypto-currency, it is even cooler that it has inspired some crypto-curious people to learn Esperanto[1] instead!
While I am here, I might as well give you a brief Esperanto lesson. Mono = money, ero = piece/quantum. So, "pano" = bread, "panero" = bread-crumb. Thus, "monero" = coin.
Many previous international currencies (all of them created with Swiss involvement) were also given Esperanto names: Spesmilo (a thousand spesoj; the speso is analogous to a "penny"), and Stelo (star).
There is even a luxury watch-brand (from Switzerland) called "Movado", which is Esperanto for "movement" (named back when watches were made with mechanical movements).
And I also learned, from the linked thread (disclosure, I am a participant), that there is a soft-drink called "Mirinda". This is an adjective that means "awe-worthy".
Back in 2014, I did an analysis of the (single-threaded) CPU-efficiency and RAM-efficiency of various data-structures (skiplists, slablists, avl-trees, rb-trees, b-trees):
I used whatever I could find on the internet at the time, so the comparison covers both algorithm and implementation (they were all written in C, but even slight changes to the C code can change performance -- uuavl performs much better than all other avl variants, for example). I suspect that a differently-programmed skip-list would not have performed quite so poorly.
The general conclusion from all this is that any data-structure that can organize itself _around_ page-sizes and cache-sizes will perform very well compared to structures that cannot.
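The arithmetic behind that conclusion is easy to check. A back-of-envelope sketch in Python, assuming 4 KiB pages and 8-byte keys/pointers (assumptions, but typical):

```
# Why page-sized nodes win: compare the depth of a B-tree whose nodes fill
# one 4 KiB page against a pointer-chasing binary tree, for a billion keys.
import math

PAGE = 4096          # bytes per page (assumed)
KEY, PTR = 8, 8      # bytes per key / child pointer (assumed)
N = 1_000_000_000    # keys

fanout = PAGE // (KEY + PTR)                    # ~256 keys per node
btree_depth = math.ceil(math.log(N, fanout))    # ~4 page reads per lookup
binary_depth = math.ceil(math.log2(N))          # ~30 cache misses per lookup

print(f"B-tree (fanout {fanout}): ~{btree_depth} page reads per lookup")
print(f"binary tree: ~{binary_depth} dependent cache misses per lookup")
```

Four page-sized reads versus ~30 dependent pointer chases per lookup is the whole story: the page-aware structure wins not because its algorithm is cleverer, but because every memory access it makes hauls in a few hundred useful keys instead of one.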