This seems like a total category error. The railroads are the only example that actually seems comparable, in being an infrastructure build-out that's mostly done by a variety of private companies. Examples of things that would be worth comparing to the datacenter boom are factory construction and utilities (electrification in the first half of the 20th century, running water, gas pipes).
For some reason this reminds me of people at work who walk up and say we did x bazillion things in n time, and then pause and expect us to express shock at how amazing that is and how much more productive they are than other teams. So what? Without a proper comparison to something equivalent I can't evaluate whether it's exceptional. I could treat each molecule as a thing and tell people how incredibly many things I eat on average per minute, but once I explain, no one would find that exceptional.
Fwiw, railroads were the reason for some of the biggest bank collapses in history. The Panic of 1873 was literally called "The Great Depression" (until a greater depression hit). 20 years later came the Panic of 1893. Both were due to over-investment and a bubble bursting, and they took out tons of banks and businesses.
We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff. We know that the value will fall over time as software and hardware both get more efficient and cheaper. And so far there's no evidence that all this investment has generated more profit for the users of AI. It's just a matter of time until people realize it and the bubble bursts.
And when the bubble does burst, what's going to happen? Most of the investment is from private capital, not banks. We don't know where all that private capital is coming from, so we don't know what the externalities will be when it bursts. (As just one possibility: if it takes out the balance sheets of hyperscalers and tech unicorns, and they collapse, who's standing on top of them that collapses next? About half the S&P 500 - so 30% of US households' wealth - but also every business built on top of those mega-corps, and all the people they employ.) Since it's not banks failing, they probably won't be bailed out, so the fallout will be immediate and uncushioned.
Have you seen video of a slime mold searching for food? It grows like crazy in a bunch of simultaneous search paths, expending tons of energy following a rough directional gradient looking for food. Once one of the branches finds the food all of the other search paths shrivel up and die off. I think slime molds are much better analogies for these situations than bubbles.
Lol, a bit dramatic at the end. There will be a correction in stocks that were priced for AI-related growth.
But what I see are two big costs for America:
1) Less money being invested into risky AI projects in general, in both public (via cash flows from operations) and private markets
2) The large tech firms that participated in the big AI-related capex spend won't be trusted with their cash balances - i.e. they'll have to return more cash, and therefore have less money for reinvestment
All the hype and fanfare that draws in investment comes with a cost - you gotta deliver. People have an asymmetric relationship between gains and losses.
> We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff.
> ...
> And so far there's no evidence that all this investment has generated more profit for the users of AI.
If you look around a bit, you will find evidence for both. Recent data finds pretty high success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive ROI.)
The trustworthiness, salience, and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chamber.
Preliminary evidence, but given that this weird, entirely unprecedented technology is about 3+ years old and people are still figuring it out (something the report calls out), this is significant.
75% report positive ROI (and the VPs are much more "optimistic" than the middle managers who are closer to the work) - but how much ROI? 1%? The fact that they don't quote a figure at all is pretty telling. And that's the ROI of the people buying the AI services, which are often heavily subsidized. If it costs a billion dollars to give a mid-sized company a 1% ROI, that doesn't sound sustainable.
I would love to see another report that isn't a year old with actual ROI figures...
Can't say why they don't report exact numbers, but it may be because of a) confidentiality, b) ROI being very context dependent, and c) a wide spectrum of ROI across different dimensions, with some 9% even reporting negative ROI. This may make it hard to cite a single number, but the majority report "moderate" to "significant" ROI, whatever that means to them.
I'll add that I've seen mentions of similar reports from other sources like McKinsey and co. e.g. this one that claims actual revenue increase: https://www.mckinsey.com/featured-insights/week-in-charts/ge... -- I tend not to take these reports at face value, but I'm seeing multiple of them from various sources that tend to align.
As an aside, I just wanted to say, these are the kinds of discussions I was hoping to see here!
It’s not easy to quantify because you’re basically substituting or augmenting labor. How do you quantify an ROI on employees? You can look at profit of a project they’re hired to execute. But with AI, it’s mixed with the employees, so how do you distinguish the ROI of the two? With time, we might be able to make comparisons, but outside of very specific scenarios it’s difficult to quantify.
> The trustworthiness, salience, and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chamber.
It honestly just isn't that interesting. (Being most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as being "ROI" rather than "ROI measurement")
In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".
This doesn't mean much. It's not even dubiously-measured ROI data, it's not ROI data at all, it's just what the leadership thinks is true.
And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy in how high-level leadership and low-level leadership/ICs rate AI "ROI".
One of the main explanations of that discrepancy is Goodhart's law. A large number of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.
That's fair, it is survey based, but it is apparently backed by formal internal measurements. The full report (https://ai.wharton.upenn.edu/wp-content/uploads/2025/10/2025... -- slides 43 onwards) mentions that 75% of them have "integrated formal ROI measurement."
There is little discussion of what that means, however, and we really can't expect concrete numbers for what is going to be sensitive business data. Given that the report tracks it across multiple industries and functions ranging from IT to operations to legal to sales, it may be hard to put into sensible numbers - or to tell how the measurements may be flawed or biased.
The other categorical error is that the American people paid the railroads a monumental subsidy to get the job done. We gave them almost 10% of the territory.
Given the size of some of these data centers, the incentives packages that local governments often give their developers, and the impact on the electric grid that can, in some cases, raise costs for other ratepayers, I'd say the comparison could be similar.
The one Google's putting in KC North is 500 acres [0] and there were $10 billion in taxable revenue bonds put up by the Port Authority to help with the cost.
This, for a company that could pay for it in cash right now.
The problem is that once built, railroads provided economic value right off the bat.
I would love to hear about the economic value being generated by these LLMs. I think a couple years is enough time for us to start putting some actual numbers to the value provided.
Equating this buildout with LLMs is also a category error. Waymo (self-driving cars) depends on the same infrastructure, and there are a variety of other robotics programs which are actually functioning, you can see them in operation. They all require a lot of GPUs to train and run the models which operate the robotics.
The answers to both of those questions are pretty guarded trade secrets. Amazon and Google just to name a couple examples are very profitable companies and I would not bet on them investing all this money without real use cases where profit is likely. Amazon is adding thousands of new robots to their factories every year.
So your argument basically boils down to: the datacenter build-out is not a waste of resources because, if it were, these companies wouldn't be building them.
I mean, your argument is that Google, which has had increasing revenue and profit for a decade, to the point of $400B in revenue + profit this year, is going to lose money because they plan to spend $180B on capital projects for new data centers next year - because you know their business better than they do.
It's not clear that Waymo is an improvement over existing infrastructure so much as a way of ensuring that fewer humans benefit from each car ride (a number that was already pathetically low).
Is Waymo a good example when Google has people in the third world sitting at a screen operating the vehicle from the other side of the world? How can its performance be trusted?
And it's probably useless at the end of the day, because everything will reduce back down from a centralized location to your desktop/laptop/tablet/phone. OpenAI, Microsoft, Meta, Google, and Oracle's dreams of a centralized computing location will not hold up.
The problem with the age input box is that we don't have the GDPR. We're mandating that people give accurate age information to advertisers, and it's legal for advertisers to sell detailed dossiers on people, including their age, and to target advertising using that age. This is why Meta wrote the age input box legislation: they want to make everyone legally required to provide Meta with their age.
If that were a uniform stance, maybe, but when it's used for partisan reasons by the party in power it's a different story. That's also not the law; the law is that anyone in the country has the right to free speech. If rights only apply to citizens, it makes a mockery of the freedoms this country is built on.
You expect them to recall some analogous example of political deportations years ago?
And anyway, almost certainly the answer is yes; it is not hard to believe that a person's stance is that systematically deporting people for disagreeing with the government is wrong. "Trump bad" is very often on the basis of principles that Trump is violating, not just because it's Trump. Surely you realize that people are mad at him in large part because of the things he does?
Do you recall an instance where Obama attempted to revoke someone's visa for protesting? I don't believe that happened a single time.
I am generally against deportations for people who haven't committed any violent crime. I don't usually waste my time talking about it when law enforcement is enforcing the laws as written though. From what I saw, Obama was enforcing the law as written. I was often opposed to what he was doing, but I don't find much point in trying to get the president to do illegal things, even if I would prefer the law be different. In fact, if you look at Obama's actions there were quite a few times Obama chose not to deport people for reasons I generally supported, but the courts said it was illegal. So again, even as I might've disagreed, focusing on Obama would be missing the point that the law needs to change, which is something that needs to happen in Congress.
I find the current situation particularly egregious because immigration agents have not only deported legal residents who have committed no crimes and violated no terms of their visas, but have also executed American citizens who have committed no crime.
When did Obama create a masked gestapo that kills innocent civilians? Oh right. Never. Both-sidesing this with Trump makes you look like a bot, a troll, or worse.
I'd say the difference with the deportations under Obama (aside from deporting more people while spending less money doing it) is that he followed the law when doing so.
As a person who spent a couple of hours watching our local ICE facility today, I'd say the differences are purely aesthetic.
I've gotten to where I don't really care -what- the law is, and believe that, from an ethical standpoint, if a person can have a house and a job and not cause trouble, I don't care if they are from Honduras or Houston - any position other than that is just racism with extra steps.
And I am aware that probably sounds crazy to most folks here, but at this point I don't care. The folks I organize with have been working since before Trump and will likely still be working when the Democrats put up whatever stuffed suit their leadership selects.
I would have a hard time arguing that after seeing Alex Pretti's public execution. I also think we can at least partially agree on who should be targeted (emphasis my own):
> Carefully calibrated revisions to Department of Homeland Security (DHS) immigration enforcement priorities and practices [...] *[made] noncitizens with criminal records the top enforcement target* [0]
I consider there to be a gulf of difference between murdering American citizens in between detaining anyone caught speaking the wrong language, and Obama's DHS and immigration policy.
> any position other than that is just racism with extra steps
Here I'll politely disagree; in the same way Uber and Lyft flooded the driver market and collapsed the price of a medallion, so too do open borders flood the market with workers, collapsing the worth of my labour.
You haven't been paying attention, and that's ok. Obama was destroying families and killing people; he just did it out of sight with a charming smile.
You think people deported by him didn't die as a result? You think his massive expansion of drone violence didn't kill people living lives as rich and complex as Pretti's? You don't remember Obama deciding not to prosecute people for Abu Ghraib?
> Let me rephrase this, 17% of the most popular Rust packages contain code that virtually nobody knows what it does (I can't imagine about the long tail which receives less attention).
I dug into the linked article, and I would really say this means something closer to 17% of the most popular Rust package versions are either unbuildable or have some weird quirks that make building them not work the way you expect, and not in a remotely reproducible fashion.
Pulling things into the standard lib is fine if you think everyone should stop using packages entirely, but that doesn't seem like it really does anything to solve the actual problem. There are a number of things it seems like we might be forced to adopt across the board very soon, and for Rust it seems tractable, but I shudder to think about doing it for messier languages like Ruby, Python, Perl, etc.
* Reproducible builds seems like the first thing.
* This means you can't pull in git submodules or anything from the Internet during your build.
* Specifically for the issues in this post, we're going to need proactive security scanners. One thing I could imagine: if a company funnels all its packages through a proxy, you could have a service that attempts to rebuild each package from source and flags differences (a rough sketch of what that check could look like is below). This requires the builds to be remotely reproducible.
* Maybe the latest LLMs like Claude Mythos are smart enough that you don't need reproducible builds, and you can ask some LLM agent workflow to review the discrepancies between the repo and the actual package version.
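To make the rebuild-and-diff idea concrete, here's a minimal sketch assuming a crate published to crates.io and a repo on GitHub. The crate name, version, tag, and subdirectory are illustrative assumptions; a real tool would resolve the repository URL from the crate metadata, handle workspace layouts, set a proper User-Agent, and report differences rather than just print them.

    # Hedged sketch: download the published .crate tarball, check out the repo
    # at the matching tag, and diff the two trees. Crate, version, tag, repo URL,
    # and subdirectory below are illustrative assumptions, not a real tool.
    import subprocess, tarfile, tempfile, urllib.request
    from pathlib import Path

    def fetch_crate(name, version, dest):
        dest.mkdir(parents=True, exist_ok=True)
        url = f"https://crates.io/api/v1/crates/{name}/{version}/download"
        tar_path = dest / f"{name}-{version}.crate"
        urllib.request.urlretrieve(url, tar_path)   # .crate files are gzipped tarballs
        with tarfile.open(tar_path) as tf:
            tf.extractall(dest)
        return dest / f"{name}-{version}"

    def checkout_repo(repo_url, tag, dest):
        subprocess.run(["git", "clone", "--depth", "1", "--branch", tag,
                        repo_url, str(dest)], check=True)
        return dest

    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        published = fetch_crate("serde", "1.0.200", tmp / "crate")
        repo = checkout_repo("https://github.com/serde-rs/serde", "v1.0.200", tmp / "repo")
        # serde lives in the `serde/` subdirectory of a workspace repo;
        # `diff -r` flags files that differ or exist only in the published package.
        subprocess.run(["diff", "-r", str(published), str(repo / "serde")])

Differences aren't necessarily malicious (generated files and release-time tweaks are common), but they're exactly the kind of thing a proxy like this could surface for review.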
> and I would really say this means something closer to 17% of the most popular Rust package versions are either unbuildable or have some weird quirks that make building them not work the way you expect
No, what it means is that the source in crates.io doesn't match 1:1 with any commit sha in their project's repo. This is usually because some gitignored file ended up as part of the distributed package, or because of poor release practice.
This doesn't mean that the project can't build, or that it is being exploited (but it is a signal to look closer).
Transformers operate on images and a variety of sensor data. They can also operate completely on non-textual inputs and outputs. I don't know what the ceiling on their capabilities is, but the complaint that they only operate on text seems just obviously wrong. There are numerous examples but one is meteorological forecasting which takes in a variety of time series sensor inputs and outputs e.g. time-series temperature maps. https://www.nature.com/articles/s41598-025-07897-4
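As a concrete illustration (not the architecture from the linked paper; the shapes, layer sizes, and the "weather station" framing are just assumptions), here's a minimal PyTorch sketch of a transformer that consumes and emits purely numeric sensor time series:

    # Minimal sketch: a transformer over non-textual sensor data. The "tokens"
    # are hourly readings from weather stations; the model predicts one value
    # (e.g. temperature) per step. A real model would also add positional
    # encodings; everything here is illustrative.
    import torch
    import torch.nn as nn

    class SensorForecaster(nn.Module):
        def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2):
            super().__init__()
            self.embed = nn.Linear(n_features, d_model)   # project sensor readings into model space
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, 1)             # one predicted value per time step

        def forward(self, x):
            # x: (batch, time_steps, n_features) - no text anywhere
            h = self.encoder(self.embed(x))
            return self.head(h)                           # (batch, time_steps, 1)

    model = SensorForecaster()
    readings = torch.randn(16, 24, 8)   # 16 samples, 24 hourly steps, 8 sensor channels
    forecast = model(readings)
    print(forecast.shape)               # torch.Size([16, 24, 1])

The attention layers don't care that the tokens are hourly sensor readings rather than words; the text-specific machinery lives in the tokenizer and embedding table, which this model simply doesn't have.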
> On Saturday, April 4, at 5:06 AM, I received a notification saying my authenticator had been removed. It hadn’t. The authenticator was still active on my phone - it was the recovery phone I had removed. Google apparently conflated the two.
This is a massive bug. I was also surprised recently that Google won't let you enroll multiple Authenticators. If we had functional security regulations, I think there would be some pretty large fines for Google's error here.
I have never consciously wrapped Axios or fetch, but a cursory search suggests that there was a time when it was impossible for either to force TLS1.3. It's easy to imagine alternate implementations exist for frivolous reasons, but sometimes there are hard security or performance requirements that force you into them.
Do browsers and Electron apps magically take up less memory on Macs? What is "good enough"? I never notice problems on my 16GB Windows laptop, so just for fun I closed all of my 6 always-on Electron-type apps, all of the 10 browser windows I had open, and a couple other ever-present apps, and it looks like without anything else Windows 10 takes about 4GB, which I think is in the same ballpark as OS X. And I probably have some stuff running that I didn't close, this is very unscientific.
Anecdotally also, my one laptop that I've upgraded to Windows 11 is a lot snappier. As a rule I haven't noticed memory pressure on any device I've ever owned as a "regular user"; it only really applies to gaming and heavy development with lots of VMs, especially these days.