Hacker News | ben8bit's comments

Let it go. This is clearly a vibe-coded site: the fonts, the layout, it all looks Anthropic-ized. If Mythos were really that good, they would not share it with anyone.


Some good points, but as a whole I'm not sure I agree. Sketch lost to Figma because of its design tooling & multiplayer. Physical products still get designed before being constructed; I don't see that going away. If anything, I think Figma should stop trying to play both sides of the field and decide what it wants to be.

> Sketch lost to Figma because of its design tooling & multiplayer.

Or maybe because you could just send a Figma link to anyone in your org and it opened in the browser vs having to tell them to download some Mac app and open a specific file that will get outdated over time.


Correct, though I believe the parent comment covered that under the broad interpretation of “multiplayer”.

Any recommendations on good open ones? What are you using primarily?

LMArena actually has a nice Pareto frontier of ELO vs. price for this:

  model                        elo   $/M
  ---------------------------------------
  glm-5.1                      1538  2.60
  glm-4.7                      1440  1.41
  minimax-m2.7                 1422  0.97
  minimax-m2.1-preview         1392  0.78
  minimax-m2.5                 1386  0.77
  deepseek-v3.2-thinking       1369  0.38
  mimo-v2-flash (non-thinking) 1337  0.24
https://arena.ai/leaderboard/code?viewBy=plot&license=open-s...
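As a quick sanity check of the Pareto claim, here is a small sketch (the `pareto_frontier` helper is my own, using the numbers from the table above) that verifies no listed model is dominated by one that is both stronger and cheaper:

```python
# ELO and $/M-token price, copied from the table above.
models = [
    ("glm-5.1", 1538, 2.60),
    ("glm-4.7", 1440, 1.41),
    ("minimax-m2.7", 1422, 0.97),
    ("minimax-m2.1-preview", 1392, 0.78),
    ("minimax-m2.5", 1386, 0.77),
    ("deepseek-v3.2-thinking", 1369, 0.38),
    ("mimo-v2-flash", 1337, 0.24),
]

def pareto_frontier(entries):
    """Return models not dominated by a strictly stronger, no-more-expensive one."""
    frontier = []
    for name, elo, price in entries:
        dominated = any(
            e > elo and p <= price
            for n, e, p in entries if n != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))  # all seven models are on the frontier
```

Every model survives the check, which is what makes the plot a clean ELO-vs-price trade-off curve.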

LMArena isn't very useful as a benchmark, but I can vouch for the fact that GLM 5.1 is astonishingly good. Several people I know with a $100/mo Claude Code subscription are considering cancelling it and going all in on GLM, because it has finally gotten (for them) comparable to Opus 4.5/6. I don't use Opus myself, but I can definitely say that the jump from the (imvho) previous best open-weight model, Kimi K2.5, to this is otherworldly — and K2.5 was already a huge jump itself!

qwen3.5/3.6 (30B) works well locally with opencode

Mind you, a 30B model (3B active) is not going to be comparable to Opus. There are open models that are near-SOTA but they are ~750B-1T total params. That's going to require substantial infrastructure if you want to use them agentically, scaled up even further if you expect quick real-time response for at least some fraction of that work. (Your only hope of getting reasonable utilization out of local hardware in single-user or few-users scenarios is to always have something useful cranking in the background during downtime.)

For a business with ten or more engineers/people-using-ai, it might still make sense to set this up. For an individual though, I can’t imagine you’d make it through to positive ROI before the hardware ages out.

It's hard to tell for sure because the local inference engines/frameworks we have today are not really that capable. We have barely started exploring the implications of SSD offload, saving KV-caches to storage for reuse, setting up distributed inference in multi-GPU setups or over the network, making use of specialty hardware such as NPUs etc. All of these can reuse fairly ordinary, run-of-the-mill hardware.

Since you need at least a few pieces of H100-class hardware, I'd guess you need at least a few tens of coders to justify the cost.

I see the 512GB Mac Studios aren’t for sale anymore, but that was a much cheaper path.

I'm backing up a big dataset onto tapes, so I wanted to automate it. I have an idle 64GB-VRAM setup in my basement, so I decided to experiment and tasked it with writing an LTFS implementation. LTFS is an open standard for tape filesystems, and there's an implementation in C that can be used as the baseline.

So far, over the last 2 days, Qwen 3.6 has created a functionally equivalent Go implementation that works against the flat-file backend. I'm extremely impressed.


It is surprisingly competent. It's not Opus 4.6 but it works well for well structured tasks.

What near SOTA open models are you referring to?

I want to bump this more than just a +1 by recommending everyone try out OpenCode. It can still run on a Codex subscription, so you aren’t in fully unfamiliar territory, but it unlocks a lot of options.

The Codex TUI harness is also open source and you can use open models with it, so you can stay in even more familiar territory.

pi-coding-agent (pi.dev) is also great. I've been using it with Gemma 4 and Qwen 3.6.

The thing I dislike about OpenCode is the limited capabilities of its editor. It's also resource intensive: for some reason, on a VM it chokes every 30 minutes, and I need to discard all sessions, commits, etc.

I don't know if it's Bun-related, but in Task Manager it's almost always at the top of CPU usage. It turns out that, for me, Bun is not production-ready at all.

I wish the Zed editor had something like BigPickle, which is free to use without limits.


> turns out for me, bun is not production ready

What issue did you run into?


Is this sort of setup tenable on a consumer MBP or similar?

Qwen’s 30B models run great on my MBP (M4, 48GB), but the issue I have is cooling: the fan exhaust blows straight onto the screen, which I can’t help thinking will eventually degrade it, given the thermal cycling it would go through. A Mac Studio makes far more sense for local inference for this reason alone.

For a 30B model, you want at least 20GB of VRAM and a 24GB MBP can’t quite allocate that much of it to VRAM. So you’d want at least a 32GB MBP.
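As a rough back-of-the-envelope check of that ~20GB figure (assuming a typical ~4.5 bits/weight Q4-style quantization; the exact format and runtime overhead will vary):

```python
# VRAM estimate for a 30B-parameter model at ~4.5 bits/weight effective,
# a typical Q4-style quantization. Overhead figure is an assumption.
params = 30e9
bits_per_weight = 4.5
weights_gb = params * bits_per_weight / 8 / 1e9  # ≈ 16.9 GB for weights alone

# Rough allowance for KV cache and runtime buffers at modest context sizes.
overhead_gb = 3
total_gb = weights_gb + overhead_gb
print(f"~{total_gb:.0f} GB")  # ≈ 20 GB, in line with the figure above
```

Longer contexts push the KV cache (and the total) higher, which is why a 32GB machine is a more comfortable floor than 24GB.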

I have 24GB VRAM available and haven't yet found a decent model or combination. Last one I tried is Qwen with continue, I guess I need to spend more time on this.

Is there any model that practically compares to Sonnet 4.6 in code and vision and runs on home-grade (12G-24G) cards?

I'm currently running a custom Gemma 4 26B MoE model on my 24GB M2... super fast, and it beat DeepSeek, ChatGPT, and Gemini in 3 different puzzles/code challenges I tested it on. The issue now is the low context: I can only do 2048 tokens with my VRAM. The gap is slowly closing on the frontier models.

It's a MoE model so I'd assume a cheaper MBP would simply result in some experts staying on CPU? And those would still have a sizeable fraction of the unified memory bandwidth available.

I haven’t tried this myself yet, but you would still need enough non-VRAM RAM available to the CPU to offload to it, right? This is a fully novice question; I have not ever tried it.

You're correct. If you don't have enough RAM for the model, it can still run but most of it will run on the CPU and be continuously reloaded from the SSD (through mmap).

A medium MoE like 35B can still achieve usable speeds in that setup, mind you, depending on what you're doing.


The Mac Minis (probably 64GB RAM) are the most cost effective.

How are you running it with opencode, any tips/pointers on the setup?

GLM 5.1 via an infra provider. Running a competent coding capable model yourself isn't viable unless your standards are quite low.

What infra providers are there?

There's DeepInfra. There's also OpenRouter where you can find several providers.

I am using GLM 5.1 and MiniMax 2.7.

Makes me think the model might not necessarily be smarter, just more token-dependent.

Asking a seller to sell less.

That's an incentive difficult to reconcile with the user's benefit.

To keep this business running they do need to invest to make the best model, period.

It happens to be exactly what Anthropic's strategy is. That and great tooling.


But they're clearly oversubscribed, massively.

And they're selling less and less (suddenly a 5-hour window lasts 1 hour on the same kind of tasks it lasted 5 hours on a week ago), so IMO they're scamming.

I hope many people are taking notes and will raise the heat soon.


I agree. I'm rather pointing out the whole strategy dictates the outcome.

Anthropic has to keep racing ahead and be seen as offering the best frontier models.

It isn't optimal, so the models cost them too much to sell at a profitable price. So they keep feeding the hype and pushing the costs higher, hoping there won't be too much heat and they'll get away with it.

I wouldn't like to be a leader at such a company, but their pay keeps them in line.


Unless you want something that looks like it's designed by Anthropic, this is still pretty shit. Amazingly "AI" hasn't replaced the very first target on their radar - design.

Design is very hard to verbally describe, and AI doesn't have good judgement on what is easy to use or attractive.

I think it's because it's non-deterministic too. You can't iteratively improve design the same way you can code.

If they wanted, couldn't they do something like RLHF? Instead of humans picking the better of 2 text outputs, they'd pick the better rendered design.

I'd be very surprised if they're not already doing this.

Working on Fronteer, a project management app that (1) integrates messaging more cohesively with tasks and (2) better supports external collaborators - think agency clients, customers, etc.

Some of the biggest pain points we’ve seen are chat being separate from a solid task manager, and the pain of collaborating with people outside your own org.

We’re currently in private beta and hope to open it up to the general public soon!

https://fronteer.app


A lot of the magic of LLMs, I think, has been tarnished by these CEOs and other FAANG companies. It might have been a far more interesting world if they didn't bring "AI" or "AGI" into the conversation in such a politicized way.

The power of the tool itself will be overshadowed by the motivations of its real owner. I can be both impressed by its ability to empower me, and be scared of the fact that the tools will change hands sooner or later and be deployed at scale to serve a goal I cannot, at minimum, support.

When most engineers and Marvel fans watched Tony Stark in the Avengers collaborating with Jarvis, they thought of Jarvis as "an AI with Google's knowledge that I can interact with". It's true that we're close to that level of interaction. However, the ultimate goal is to get as much as possible automated in Jarvis, to the point where Tony Stark is not needed, or Tony Stark can be replaced by anyone with a mouth.

In this example, Jarvis isn't the goal but a checkpoint. The goal is a genie, providing software and research to anyone who is loaded with money and knows how to rub the metaphorical lamp the right way.


> the tools will change hands sooner or later and be deployed at scale to serve a goal I cannot, at minimum, support

Personally, I don't think the tools need to change hands at all. They are already in the hands of people who are deploying them at scale to serve goals I cannot and do not support.

The people running AI companies right now are some of the most evil motherfuckers on the planet


> The power of the tool itself will be overshadowed by the motivations of its real owner.

Not only that, but by how blatantly and openly these owners are discussing the tool's power. They are publicly crooning about their product's ability to replace workers. It's the first line of their sales pitch. And also, their customers (business CEOs) are publicly crooning about how awesome it is that they can reduce their headcount! Both the AI producers and their customers are absolutely bragging about worker displacement, and not a single guillotine has been constructed in the streets yet.


It'd be nice if they didn't use the terms at all, because I don't think they're useful, relevant, or real.

If we thought of all of this as 'stochastic data systems', then our heads would be in the right place: we'd think about it simply as 'powerful software' that can be used for good or bad purposes, and the negative externalities would be derived from our use of it, not from some inherent property.


On the other hand, "magical new systems that provide almost unlimited capacity for intelligent work" is probably a more functional mental model. The genie can give you 1000 wishes until you reach your session limit.

Not quite 1000 on Codex as of last day or two!

It would have been better if they didn't bootstrap it off the outright theft of a very large amount of IP only to lock it behind a paywall.

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Reverend Mother Gaius Helen Mohiam, Dune

It’s the inevitable result of valuations based on hype and future potential, not business fundamentals. It incentivizes companies to be as hyperbolic as possible with their pitches and marketing.

Cryptocurrency is an interesting technology with some niche use cases, but it was pitched as replacing the entire money system. LLMs are extremely useful for certain types of work, but are pitched as AGI ending all work. Etc.


Magic or no, ultimately "AI" leads to labour displacement and it's just a continuation of the much broader trend of automation driven by computers.

Labour displacement leads to an erosion of standards of living, and in a world that ties purpose to work it is an existential threat on a very practical level.

It was always going to be met with violence once it became more than a curiosity for tinkerers.


We have, as a civilization, two paths before us:

a) Decouple the value of human life from labour.

b) Watch as the value of human life rapidly approaches zero.

---

Though I'd expand this by adding that "technically alive" is not a very good standard to aim for. Ostensibly we're already heading for something like poverty-level UBI + living in a pod + eating the proverbial bugs. We need a level above that!

A great exploration of the pitfalls of "preserve humanity" as a reward function is the video game SOMA. I think you also need "preserve dignity" to make the life actually worth living.

(Path `a` is not without its pitfalls: what lack of survival pressure might do to the human culture and genome, I leave as an exercise for the reader! But path `b` I think we already have enough examples of, to know better...)


> We have, as a civilization, two paths before us

You forgot c) the Butlerian Jihad: mass-outlaw AI research, AI usage, AI building, and AI infrastructure, on penalty of death.

It may not be a good option but it's there


This will literally never happen so it is not worth considering

Just keep telling everyone that and hope they keep believing you.

Exactly. At the very least, we should be treating AI like nuclear weapons. It can exist but it should be locked away and never used.

When the value of human labour reaches zero the economy will collapse so that will be interesting.

I don't see that as a guaranteed outcome if there's something like UBI to sustain demand, and automation to sustain supply.

UBI is only valuable if money is valuable though...what are you going to trade it for if no one has a job and everyone has access to super powerful production tools like advanced LLMs (which are at the low end of automated tooling overall)?

Idk, food and shelter?

"Vibe coding a house" will increase housing supply, (and automation at every step of supply chain too), but the costs will still be nonzero.


But just like labour, commodity prices are also going to go heavily down, exactly because of the automation you describe.

In addition, it's all fine and good to say "people will still need to buy food, clothes, and shelter".

But what is the price going to be? Who can afford it if no one has a job? Who's going to sell it, and to whom? For what profit?


> in a world that ties purpose to work is an existential threat on a very practical level.

I don't disagree that we tie purpose to work and severing that tie will have negative societal consequences, but it is far more impactful that we tie the ability to continue to exist to work (for anyone not lucky enough to already be wealthy).

If I suddenly became unemployable tomorrow I'm positive I could find alternate purpose in my life to fill that gap, I already volunteer for various causes and could happily do more of the same to fill in the gaps left by lack of work. What I couldn't do is feed myself, keep myself housed, and get medical care (especially in the US, where this is very directly tied to work).

The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist. We are creating the conditions for desperation that likely will result in increasing violence as outlined in the linked post.


> The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist.

Think of the alternative, though: If we planned for a soft landing and implemented safety nets and started transitioning ourselves to a society where people didn't have to work to survive, then a few trillion dollar companies would make slightly less profit every year. We simply cannot allow that. Won't someone think of those trillion dollar companies for a minute?


^^^^

>Labour displacement leads to an erosion of standards of living

The two biggest labor displacements in human history were the agricultural and industrial revolutions, both of which resulted in enormous gains in human living standards. Can you think of a mass labor displacement that resulted in an overall erosion of living standards? I cannot.


The Industrial Revolution increased labor hours by two or three times, depending on circumstances. In the sense that they reduced the time for life (leisure) versus the time spent being a cog in the wheel of an industrial system (labor), it certainly eroded living standards.

For a very specific example: the cotton gin likely increased the demand for slave labor in the American South, leading to harsher conditions for slaves, increased acrimony between slaveholders and abolitionists, and eventually the Civil War (the decimation of the Southern economy, the pivot of Northern society to a war footing w/ associated disruptions, and 600,000 Americans dead).


The agricultural and industrial revolutions weren't "labor displacement"; they were technological and social changes that happened unevenly and gradually in time and space, and which resulted in labor displacement. But displacement was not the only cause of the gains, and the gains certainly didn't happen BECAUSE of labor displacement. I would argue the subsequent labor displacement caused a minor part of the social gains to be later distributed and realized through class struggle, but that's beside the point.

Most wars cause mass labor displacement and military technological advancements that later translate into society as a whole. Are you prepared to argue for wars? If you are American, you are experiencing firsthand the effects of what was once a major part of your industrial labor being absorbed by China. It has led to massive inequality and an erosion of standards of living in the US. Not so much for the Chinese working class, which has steadily improved its standard of living. Are you going to argue for that?

I think that if we only look at things from a limited perspective (in this instance, a technocratic and teleological view of history, where history has a designed finality that will be achieved through the unrestrained development of productive forces), we are bound to quietly take part in the destruction of society and nature, now viewed as externalities, and to accept the worst of atrocities in the name of "advancement", while most of the gains are captured in the short term by a minority.

> Can you think of a mass labor displacement that resulted in an overall erosion of living standards? I cannot.

The mass evictions of the Scottish Highlands [1] in which peasants were driven at the point of bayonets to the lowland city slums to make way for the British government to transform Scotland into a mass sheep/wool production monoculture economy.

The use of kidnapped Africans as slaves in the Americas was also an example of labor displacement - by introducing a source of mass human labor with absolutely no human rights - to scale the agricultural commodity economy (cotton, tobacco, sugar), which resulted in horrendous living standards for the enslaved, and an erosion for the poorly paid peasants whose labor they replaced. Slavery was a very "efficient" way to use labor.

1. https://en.wikipedia.org/wiki/Highland_Clearances


AI is different. It promises to be able to do everything humans can, but better and more cheaply. When AIs can do every human job cheaper than the subsistence cost of employing a human, humans will be economically obsolete and worthless.

Then there's the minor issue of AI deciding to just wipe us out because we're in the way.

Taking everything together, AI more powerful than that which currently exists must not be created. This needs to be enforced with an international treaty, nuking data centers in non-compliant states if need be.


Before the industrial revolution, approximately 90% of people worked in agriculture. In fully industrialized countries, that figure is now <2%. That decrease constituted a nearly full replacement of everything humans were doing, better and more cheaply. While this time might be different, I don't think this is a given.

Maybe it’s not a given, but it is part of the sales pitch to CEOs. Several companies have already announced layoffs due to AI being better and more efficient than humans.

How much truth there is to it we don’t know for sure. But it’s not something to be ignored.


CEOs have been saying the exact same thing for the entire history of automation. Take computing, for example, an industry that's always been unusually amenable to automation:

— in the 1960/1970s, when compilers came out. "We don't need so many programmers hand-writing assembly anymore." Remember, COBOL (COmmon Business-Oriented Language) and FORTRAN (FORmula TRANslator) were marketed as human-readable languages that would let business professionals/scientists no longer be reliant on dedicated specialist programmers.

— in the 1980s/1990s, when higher-level languages came out. "C++ and Java mean we don't need an army of low-level C developers spending most of their effort manually managing memory, and rich standard libraries mean they don't have to continuously reimplement common data structures from scratch."

— in the 1990s/2000s, when frameworks came out. "These things are basically plug-and-play, now one full-stack developer can replace a dedicated sysadmin, backend engineer, database engineer, and frontend engineer."

While all of these statements are superficially true, the result was that the world produced more software (and developer jobs) than ever, as each level of abstraction freed developers from having to worry about lower-level problems and instead focus on higher-level solutions. Mel's intellect was freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.

While this time with AI may truly be different, I'm not holding my breath.

[0] http://catb.org/jargon/html/story-of-mel.html


> AI is different

Literally the same thing.

> humans will be economically obsolete and worthless

Only if we are talking about a socialist system (and they are making pretty small progress in the field of AI). A human's value under a capitalist system is equal to their ability to create goods and services. And AI cannot make this ability smaller in any way.

A people's well-being is literally the goods and services created by that people. How can it decrease if the people's ability to produce those goods and services is not hindered in any way?

So, when it comes to the entire nation benefiting from AI, the most important thing is to preserve capitalism, and then the free market will distribute all the benefits. The main danger is a descent into socialism, with all these basic incomes, taxation out of production, and other practices that would lead to people being declared economically obsolete and mass executed to optimize their carbon footprint or something.


> A human's value under a capitalist system is equal to their ability to create goods and services. And AI cannot make this ability smaller in any way.

Yes they can. Your ability to produce goods and services depends on the infrastructure around you. When that's all run by AIs for AIs, humans won't be able to compete.

See that land over there producing food you need to eat? It turns out it's more economically efficient to pave it over with data centers etc.

Under a US-style capitalist system the rich (i.e. the AIs and AI-run businesses) control politics, the courts, etc, so the decisions the system makes will favour AIs over humans.

> So, when it comes to the entire nation benefiting from AI, the most important thing is to preserve capitalism, and then the free market will distribute all the benefits

...to the AI-run companies!

> The main danger is a descent into socialism, with all these basic incomes

Without UBI most people (or maybe everyone) would starve.


> depends on the infrastructure around you

Yeah, and who is creating that infrastructure? Jesus? It's the same kind of goods and services.

> When that's all run by AIs for AIs, humans won't be able to compete.

So what? The ability to produce goods and services (and therefore general well-being) will not decrease because of that.

> It turns out it's more economically efficient to pave it over with data centers etc

By the way, a good argument against your position. Agricultural land is very cheap, but the vast majority of people who believe AI will put people out of work and worsen overall well-being are for some reason reluctant to buy this asset, which would see a catastrophic increase in value under such a scenario. So these people are either incapable of analyzing the economic processes, and their predictions are worthless, or they don’t really believe in such a scenario.

> will favour AIs over humans

Let me repeat: it does not reduce the ability to create goods and services. Under capitalism, this is the only characteristic that determines people's well-being.

> ...to the AI-run companies!

I think this is a fairly unlikely scenario. But even in this very unlikely case, people's well-being will not be reduced. Simply because of the mechanisms of creating well-being.

> Without UBI most people (or maybe everyone) would starve.

Economic theory (and 20th-century economic practice) demonstrates the exact opposite. In every country that attempted to effectively implement UBI, it led to a sharp decline in production and mass starvation. Literally every single time.


The loom hurt English weavers (flat wages for 50 years) and decimated India's textile trade. Also, Rust Belt deindustrialization.


If AI benefited everyone and not just the billionaires, we would be viewing it differently.

That's a truism. But it ignores The Iron Law of Oligarchy, Pareto Principle, and dozens more that remind us that power tends towards centralization. It's currently fashionable to call out the billionaires, but if you removed them, they'd just be replaced by corrupt government officials, or something else.

That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.


More importantly we shouldn't deny the rest of humanity benefits on the basis that the majority of the benefit accrues to the powerful. We should strive to change the distribution pattern, not remove the benefit.

>we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.

Not to put too fine a point on it, but this was basically how the Japanese post-war economic miracle was achieved.

In this case it was America which ordered the Japanese oligarchy to be stripped of its wealth.

We've had decades of propaganda telling us that this is the worst thing we could do for economic growth though so it's natural to doubt.


The problem with billionaires is that they are able to hoard so much money by exploiting others. We would be much better off if billionaires weren't given so much advantage by Capitalism as those resources would be much more useful if distributed.

The biggest problem we currently have with billionaires is that they are now so rich that the world becomes like a game to them and some of them are deliberately pushing us to a dystopia where non-billionaires become functional slaves (c.f. Amazon workers).


“But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.”

You’re right. Instead of implying, we should be taking active steps to do it.


Right, giving up is actually how these things end up becoming principles/laws. Power centralizes because people become complacent and ignorant on matters of power, so there ends up being a power vacuum, to which others seize the opportunity. But absolute power centralization almost never occurs, due to the delegation that is necessary to wield that power in practice, and so these two forces end up balancing each other. As such, the equilibrium point (or point of maximum entropy) ends up being some type of oligarchy. But anyone can take steps to address this and adjust this equilibrium point, but it takes active work.

Unfortunately, this is the only way to get enough venture capital to support the compute needs for this kind of technology. Who is going to spend hundreds of billions on a vague idea without regular claims that this will upend the existing economy in six to twelve months and whoever owns it will become unfathomably rich? And despite all the actual developments going against that idea, investors keep falling for it. This will continue until it crashes, one way or another. The question is how long it can build up and how deep the fall will be. LLMs will certainly change the economy in the end, but so did mortgage-backed securities.

It's a sad indictment of our society that there is always a shortage of money for medical care, infrastructure, housing, food stamps and space exploration but always a surplus of cash for war and tools that purport to replace the workforce.

There will always be a shortage of money for medical care. The dirty secret of social medicine is that a small percentage of the population are essentially unhappy utility monsters [1] who gain little or no benefit no matter how many resources are poured into treating them.

[1] https://en.wikipedia.org/wiki/Utility_monster


There isn’t really a shortage of money for those things, just rampant levels of fraud, corruption, and incompetence in the government to make those things artificially expensive. California spends so much money on high speed rail and gets 0 feet of track because they’re not paying for track; the whole thing is a scam where the politicians give taxpayer money to their political supporters in exchange for political support. Defense isn’t immune to this either; Boeing, which builds a shitty heavy lift rocket out of Space Shuttle spare parts and delivers it late and over budget, pulls the exact same bullshit with their defense contracts, and there’s always some shitty Senator siding with them against the American people whenever anyone gets upset.

The opportunity cost to society of performative model training is stunning: 400M for a Grok training run to dominate the charts for two weeks.

The current British government should be a shining beacon for you! Its welfare bill actually outstrips national income by far. Britain's pathetic defense capabilities cannot even see off Russian warships that intimidate by deliberately hanging around British waters assessing our vital undersea cabling. The UK government has now asked France if it can help deter these ships. Tangentially, I should add that even with their massive expenditure on the National Health Service (NHS), it's not enough, and too many people feel that they have to go abroad to get life-saving operations and procedures. If they can afford it, of course. But sure, that is another matter. As far as I can tell, there seems to be a pretty much apolitical consensus on both areas.

Curious how France manages to have enough resources to protect its own waters, help the UK protect theirs, AND have free universal healthcare...

> It's a sad indictment of our society that there is always a shortage of money for medical care...

It has nothing to do with society; there is infinite demand for medical care. The upper limit is whatever it takes to live until the universe's heat death in good health. That takes a lot of resources.

However much society spends on medical care, there is always more that could be spent. The modern era has the best, most affordable medical care in history and people are showing no signs of being satisfied at all.

While war spending generally just causes pain for no gain it doesn't change the fact that there will never be enough available to satisfy people's demand for medical care. Every single time people get what they want they just come up with a new aspirational minimum standard.


War accelerated evolution, it’s why it exists.

So did compassion, probably in a greater amount. And yet the greater amount of resources goes into war at the expense of compassion.

Humanity has taken control of its own evolution and no longer relies on natural selection to be the driving force for change. Using evolution as an excuse to make bad and immoral choices is a poor argument and should be left back in the stone age.


Yes, the social darwinist approach inevitably leads to eugenical thinking and the human meat grinder that follows. We, as beings with the capacity to distinguish harmful from non-harmful behaviour, collectively bear the consequences of harmful behaviour: human suffering and the suppression of freedom.

>Humanity has taken control of its own evolution

Has it taken full control of it or just partial control?


You have cause and effect mixed up.

Were you around for the first release of GPT? It was not the CEOs that were kvetching about being paperclipped by AGI

I don't want to stir up the hornet's nest here, but in my humble opinion the entire problem rests on the unabated and unchecked modern, "late-stage" capitalism model, championed by the U.S. and since exported everywhere else, where it has taken firm root, even in Europe, which as of yet has a few more checks and balances (a fact that unsurprisingly draws a lot of ire from the model's acolytes and priests across the Atlantic).

The Soviet Union lost due to an inferior societal model, but this one too has strayed far from what was once a relatively sustainable path. The American dream is now a parody of itself, as it takes ever more just to end up where everyone else is. I could go on about the irony of wanting to escape the pit while refusing to acknowledge that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks, or Trumps, or their hordes of peripheral elites.

Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".

The acceleration of wealth disparity may prove to be nearly geometrical, as the common man is further stripped of any capacity to inflict change on the "system". I hope I am wrong, but for all their crimes, anarchy, and, in a twist of irony, inhumane treatment of opponents, the October revolutionaries in Russia, yes the Bolsheviks, were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't have mass surveillance used against them in the same capacity our gadgets allow the "governments" today, nor were they up against AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put in action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.

Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding people become civilised and refrain from throwing eggs (or Molotovs) on celebrities that are about to swing _entire governments_, is not seeing the forest for the trees.

There's also no precedent, in a way: the historical cataclysms we have created ourselves have been on a smaller scale, so we're spiraling outwards, and not all of the tools we think we have are going to have the effect required to enact the change we want. In the worst case, of course.


Which part of the societal model do you find inferior? I thought it was mostly economics and bureaucracy.


stealing and reusing the work of thousands of people as your own is magic now?

Do they not know how to make backups?

making a copy is not stealing, but presenting / selling someone else's work as your own is. especially when it's done on that scale by big corps and in an automated way

No, it’s tarnished by becoming too popular. Just like how people hated Nickelback, if you remember.

> A practical loop for training taste

Taste is cheap. Taste (or a rudimentary version of it, at least) is something you start with at the beginning of your career. Taste is the thing that tells you "this is fucking cool", or "I don't know why, but this just looks right". LLMs are not going to replicate that, because an LLM is not a human and taste isn't something you can make. Now - MAKING something that "looks right" is hard, and because LLMs are churning out the middle, the middle is moving somewhere else. Just like rich people during the summer.


Taste comes from childhood imo. It is not something that magically appears in adulthood.

It was always there, albeit lying dormant until one does something that requires it.


I would... actually maybe agree with this. I'm not certain it can be taught!


We use them for a couple of things - very happy. I think the best reason (other than service robustness) is probably support. CloudFlare is great until it's not, and you aren't paying $$$ for enterprise support. This is probably one of the most underrated reasons to switch to any lesser-known (but still rock-solid) infra service. UpCloud too - great support!


Funny story, but I didn't realise how much I didn't want an Apple Watch until I got one. I exercise daily and most days I just want it to shut up.


I’ve disabled all activity notifications. It actually helps me avoid the phone on other types of notifications.

