Hacker News | SOLAR_FIELDS's comments

I am not sure how workload-specific it is, but in the cloud compute of organizations I've worked in, there's often been substantial savings from literally just switching workloads from x86 machines to ARM machines, with no other changes. The benefit is usually twofold: a lower price for the instance, and better efficiency as well. In one organization of recent memory we were doing dynamic autoscaling of hundreds of kubernetes nodes simultaneously and were able to project / achieve about 15% savings, conservatively, just from going x86 -> ARM with no additional changes. A workload that is CPU-bound but doesn't depend on x86 architecturally would probably see a number significantly higher than that 15%

If you’ve ever had to be part of the frankly batshit insane procurement process that some organizations force you to run the gauntlet of, doing this becomes a very obvious and appealing option

Ah, the AWS Marketplace procurement model, where products mostly exist so that you can line item things through Amazon rather than going through a lengthy procurement process

Not surprised to see this is common. At my company basically everyone and their mother are using Claude Code via Bedrock, despite us having company-wide Windsurf, Copilot and ChatGPT Enterprise accounts

That sounds different, the parent is saying they're using that because then no new billing and stuff has to be negotiated/setup, but in your case everything is already setup and people have access, they just chose to use something else?

The US Federal Government specifically calls out Monero as one of the coins that it hates, which means that it must be quite effective at achieving its goals. So the pro is that you know it works. The cons are nothing specific to Monero, just general criticism of cryptocurrencies. Not being a deep crypto user myself, at least, I haven't heard anyone speak of any flaw specific to Monero that isn't shared by a significant portion of the remainder of all of these coins

One admittedly minor issue is that since Monero is ASIC-resistant, it's very attractive to mine on compromised computers. Other coins suffer this as well I'm sure, but I don't have any data on this. I guess this is mainly a hunch.

> The US Federal Government specifically calls out Monero as one of the coins that it hates, which means that it must be quite effective at achieving its goals. So the pro is that you know it works.

Official condemnation doesn't work like that. Facebook's cryptocurrency, Libra, was also condemned, and we know it didn't work because it never actually existed.


Every time I hear about this megamerge and stacked pr nonsense, it just smells to me. Like, why does your engineering organization have a culture where this sort of nonsense is required in the first place? Anytime I see articles like this gushing about how great tool XYZ is for stack merging and things like that, all I hear is "you don't have a culture where you can get someone looking at and mainlining your PR on the same day"

The jj lovers can go build their massive beautiful branches off in a corner, I'll be over here building an SDLC that doesn't require that.

Old man yells at cloud moment is over


Not all software is developed by one software organization.

Programs to manage “stacks of patches” go back decades. There might be hundreds of patches that have accumulated over years, all rebased on the upstream repository. The upstream maintainer might be someone you barely know, or someone you haven’t managed to get a response from. But you have your changes in your fork and you need to maintain them yourself until upstream accepts them (if they ever call back).

I’m pretty sure that the Git For Windows project is managed as patches on top of Git. And I’ve seen the maintainer post patches to the Git mailing list saying something like, okay we’ve been using this for months now and I think it’s time that it is incorporated in Git.[1]

I’ve seen patches posted to the Git mailing list where they talk about how this new thing (like a command) was originally developed by someone on GitHub (say) but now someone on GitLab (say) took it over and wants to upstream it. Maybe years after it was started.

Almost all changes to the Git project need to incubate for a week in an integration branch called `next` before they are merged to `master`.[1] Beyond slower testing for the Git project itself, this means that downstream projects can use `next` in their automated testing to catch regressions before they hit `master`.

† 1: Which is kind of like a “megamerge”


Makes total sense! But what you described is like less than 5% of the use case here. Right tool for the right job and all that, what doesn't make sense is having this insanity in a "normal" software engineering setup where a single company owns and maintains the codebase, which is the vast majority of use cases.

> incorporated in Git.[1]

Dangling footnote. I decided against adding one and forgot to remove it.


It depends. We have a pretty good review culture (usually same day, rarely more than 24h), but some changes may need multiple rounds of review or might have flaky tests that only surface after a few hours. Also some work is experimental and not ready to push out for review. Sometimes I create a very large number of commits as part of a migration and I can't get them all reviewed in parallel. It can be a lot of things. Maybe it happens more with monorepos.

All fair points, indeed I face each of the challenges you listed periodically myself. But it's never been often enough to feel like I need to seek out an entirely different toolchain and approach to manage them.

Well, fortunately Jujutsu isn’t an entirely different toolchain and/or approach. It’s one tool that’s git-compatible and is quite similar to it. But where it’s different, it’s (for me) better.

Yeah, I've never used Jujutsu, but from what I've seen so far everything it does can be done with Git itself, just perhaps in a (sometimes significantly) less convenient way.

Sure, true, I would say "often significantly" though, to the extent that you would never bother doing half the things with git that you can do with Jujutsu because it's such a pain.

I'd really like to hear your argument about when a single large PR is better than stacked PRs, from both the PR author's and the reviewers' perspectives

Why frame this as either/or? Those aren't the only two options.

There are different types of "large" PRs. If I'm doing a 10,000 LOC refactor that's changing a method signature, that's a "large" PR, but who cares? It's the same thing being done over and over; I get the gist of the approach, do some sampling and sanity checks, check sensitive areas, and done.

If I'm doing something complex and storied enough that it requires stacks with dependencies, then I'm questioning why I haven't split the thing into smaller PRs in the first place and had those reviewed. Ultimately the code still has to get reviewed, so often it's about reframing the mindset more than anything else. If the organization slows me down to the point that chunking the work into smaller PRs is worse than a stacked-PR approach, I'm not questioning the PR structure, I'm questioning why I'm being slowed down organizationally. Are my reviews not picked up fast enough? Is the automated testing situation not good enough? The answer always seems to come back to the process, not the tooling, in these scenarios.

What problem does the stacked PR solve? It lets me continue working downstream while someone else reviews my unmainlined upstream code that it depends on. If my upstream code gets mainlined at a reasonable rate, why is this even a problem to be solved? It also implies that you're only managing 1-3 major workstreams if you're getting blocked on the downstream feature, which raises another question: why am I waterfalling all of my work like this?

Fundamentally, I still have to manage the dependency issue with upstream PRs, even when I'm using stacked PRs. Let's say an upstream reviewer in my stacked-PR chain needs me to change something significant - a fairly normal operation in the course of review. I still have to walk down that chain and update my code accordingly. Having tools that make that slightly easier is nice, but the cost-benefit of being on a different opt-in toolchain that requires its own learning curve is questionable.


> If I'm doing something more complex and storied to the point it requires stacks with dependencies, then I'm questioning why I haven't split and chunked the thing into smaller PR's in the first place and having those reviewed.

It looks like you see stacked PRs as an inherently complex construct, but IMO splitting the implementation into smaller, more digestible and self-contained PRs is exactly what stacked PRs are about

So if you agree that that is the better engineering practice, then jj is just a tool that helps you do it without thinking too much about the tool itself


Why you would like git and not jj is beyond me; this must be something like two electric charges being the same and repelling each other. It’s the same underlying data structure with slightly different axioms (conflicts are allowed to be committed vs. not; the working tree is a commit vs. isn’t).

Turns out these two differences, combined with tracking change identity over multiple snapshots (git SHAs), allow for ergonomic workflows that were possible in git, just very cumbersome. The workflows that git makes easy, jj also keeps easy. You can stop yelling at clouds and sleep soundly knowing that there is a tool to reach for when you need it, and you’ll know when you need it.


> you don't have a culture where

Yeah, and? Not everyone is in control of the culture of the organization they work in. I suspect most people are not. Is everyone on HN a CEO or CTO?


No, but there are a lot of them, and principal and staff engineers, and solo folks who would get to set the culture if they ever succeed.

A lot of people's taste making comes from reading the online discussions of the engineering literati so I think we need old folks yelling at clouds to keep us grounded.


Temporarily embarrassed CEOs and CTOs

I think the unspoken part is that the mess of commits is being produced by agents not people.

That’s why it’s always the same confusing hype when it’s discussed, because it’s AI/LLM hype effectively


I was in this situation long before llms came along. They may have exacerbated it a bit, but they are not the root cause.

Do you ask a bridge engineer if they forgot to reinforce the supports when they built the bridge? Even when I didn't know about security this was a table stakes thing. People saving sensitive things in plaintext are upset that their poor practices came back to bite them. Now, at the risk of sounding like I'm victim blaming here, Vercel is also totally bearing some responsibility for this insanity. But come on. FAFO and all that.

Wouldn't it be great if Aurora Serverless V2 actually supported these copy-on-write semantics? I would immediately be able to throw out a pile of slow code if it did.

Can get around this with a local STT model and text input, but the UX is probably clunkier

Every time I see a statement like this I wonder what specific features of git that people feel like are terrible enough that it’s time to completely start over. Besides “the UX is kinda shit and it’s confusing to learn”, which there are many solutions for already that don’t involve reinventing a pretty good wheel.

Coming from mercurial (which is older than git), git doesn't understand a branch. Instead of a branch you get a tag that moves, which is very different. Too often I'm trying to figure out where something came in, but there is just a series of commits with no information about which commits are related. Git then developed the squash+rebase workflow, which sort of gets around this, but it makes commits larger (bad) and loses the real history of what happened.

Git was not the first DVCS, there were better ones even when it was made. But Linus pushed git and people followed like sheep.

(I'm using git, both because everyone else is, and also because github exists - turns out nobody even wants a DVCS, they want a central version control system with the warts of SVN fixed).


> Coming from mercurial (which is older than git)

Git is older than mercurial by 12 days. Bazaar has git beat by about the same amount of time. The major DVCSes all came out within a month of each other.

> But Linus pushed git and people followed like sheep.

I don't think this is true. Until around 2010-2011 or so, projects moving to DVCS seemed to pick up not git but mercurial. The main impetus I think was not Linux choosing git but the collapse of alternate code hosting places other than GitHub, which essentially forced git.


way way back in the day I did some digging into all three - and picked bazaar for my personal projects. that didn't last long lol

the lack of a proper branch history is also the main pain point for me. but i disagree that no one wants a DVCS. having a full copy of the history locally, and being able to clone from any repo to anywhere else and even merge repos (without merging branches), is a major win for me.

Git is basically fine even though the verbs are backwards - e.g. you shouldn't need to name branches, commits should be far more automatic, but the basic mechanisms are fine.

GitHub is an abomination.


You might already be aware, but jj fixes exactly those complaints you have with git

Right.

How we got git: CVS was totally terrible[1], so Linus refused to use it. Larry McVoy persuaded Linus to use BitKeeper for the Linux kernel development effort. After trying BitKeeper for a while, Linus did the thing of writing v0 of git in a weekend in response to what he saw as the shortcomings of BitKeeper for his workflow.[2]

But the point is there had already been vcs that saw wide adoption, serious attempts to address shortcomings in those (perforce and bitkeeper in particular) and then git was created to address specific shortcomings in those systems.

It wasn't born out of just a general "I wish there was something easier than rebase" whine or a desire to create the next thing. I haven't seen anything since that comes close to being compelling in that respect. jj falls into that bucket for me. It looks "fine". If I was forced to use it I wouldn't complain, but it doesn't look materially better than git in any way whatsoever, and articles like this which say "it has no index" make me respond with "Like ok whatever bro". It really makes no practical difference to me whether the VCS has an index.

[1] I speak as someone who maintained a CVS repo with nearly 700 active developers and >20mm lines of code. When someone made a mistake and you had to go in and edit the repo files in binary format it was genuinely terrifying.

[2] In a cave. From a box of scraps. You get the idea.


To be fair, the "shortcomings" that spurred it on were mainly the Samba guys (or just one of them) reverse-engineering BitKeeper, which caused the free kernel-development license to get pulled, which caused Linus to say "I can build my own with blackjack and pre-commit hooks". And then he did, addressing it toward his exact use case.

It gained tons of popularity mainly because of Linus being behind it; similar projects already existed when it was released.


Mercurial was there, was better and more complete.

Too bad it didn't win the VCS wars.


When I tried both at the time, hg was just really slow, so I adopted git for all my personal projects because it was fast and a lot better than cvs. I imagine others were the same.

I went with bzr mainly because it had an easy way to plug a "revision" into my documents in a way I could understand and monotonically increment.

hg was slow, though I don't know how bzr compared, as I was only using it lightly.


Mercurial and Git started around the same time. Linus worried BitMover could threaten Mercurial developers because Mercurial and BitKeeper were more similar.

If git would change two defaults, that would make me really happy:

  1. git merge ONLY does merges (no fast forward/rebase). git pull ONLY does a fast forward
  2. git log by default is git log --first-parent. Just show commits where the parent is the current branch. This makes the merge workflow really easy to understand and linear, because in the end, you only care about commits on the trunk.
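Both behaviors can be approximated today with git's configuration knobs; a sketch per repository (the `lg` alias name is made up, since `git log`'s own default can't be changed):

```shell
# Run inside a repository. Approximates the two proposed defaults.

# 1a. `git pull` only ever fast-forwards; it errors out instead of
#     creating a merge commit or rebasing.
git config pull.ff only

# 1b. `git merge` always records a real merge commit, never fast-forwards.
git config merge.ff false

# 2. A first-parent, one-line log showing only the trunk-line commits
#    (the alias name `lg` is arbitrary).
git config alias.lg "log --first-parent --oneline"
```

Add `--global` to apply them to every repository for the current user.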

Most have pointed out that the OneDrive exclusion makes sense due to its complexity. But I see no one here defending the undocumented .git exclusion. That’s pretty egregious - if I’m backing up that directory it’s always 100% intentional, and it definitely feels like a sacrifice of product functionality for stability and performance. Not documenting it just twists the knife.
