
> me being a bit ADHD

Wait, what article?


that was my experience too!

I've had to change my video and music consumption habits, because I fall asleep fairly often with either music or videos playing in the background (bad habit, I know). I'm always sure to switch to a playlist running locally when I get tired, because I'll be damned if someone's slop is going to get monetized while I sleep and the algorithm starts sneaking that crap in.

I run internal services on DO that I've considered moving to Hetzner for cost savings.

Could I take it down for the afternoon? Sure. Or could I wait and do it after hours? Also sure. But would I rather not have to deal with complaints from users that day and still go home by 5pm? Of course!


> The store exists, but employee didn't show up to open it.

I work in brick and mortar retail, and trust me, we'd figured out how to have no one show up to open the store on time long before AI came around.


That's actually the best hypothesis I've heard to date.

My immediate reaction to anything someone says they're using OpenClaw for is "That's great, but it would have taken the same amount of effort to ask your LLM to write a script to do the same thing, which would be better in every possible way."

My approach to automation projects is just about the polar opposite of something like OpenClaw. How can I take this messy real-world thing and turn it into structured data? How can I build an API for the thing that doesn't have one? How can I define rules and configuration in a way that I can understand more about how something is working instead of less? How can I build a dashboard or other interface so I can see exactly the information I want to see instead of having to read a bunch of text?

It wasn't really until people started building things with coding assistants that I even saw the value in LLMs, because I realized they could speed up the rate at which I can build tools for my team to get things OUT of chat and INTO structured data with clean interfaces and deterministic behavior.


> "That's great, but it would have taken the same amount of effort to ask your LLM to write a script to do the same thing

As a no-longer-Claw-user, hard disagree. The convenience is being able to ask it to do something while I'm grocery shopping and have it automatically test it etc. Sure, I can set up Claude Code or some other tool similarly, but the majority of us aren't going to take the time to set it up to do what OpenClaw does out of the box.

I had OpenClaw do more stuff for me in the 2-3 weeks I used it than I have with pi/Claude since I stopped using it.


Genuine question: why did you stop using it?

Edit: ah, scrolled down where you answered, thanks


Such as?

Lots of simple one-offs. Stuff like "Here's the URL for a forum thread that has 10 pages of messages. Go through all of them and tell me if information X is in there." Or "Here is the site for after-school activities. Check it once a day and notify me if there is anything that begins in March."

Also, I got it to give me the weather information I always want - I've not found a weather app that does it and would always have to go to a site and click, click, click.

I can add TODOs to my todo list that's sitting on my home PC (I don't have todos on the cloud or phone).

All of these can be vibe coded, but each one would take more effort than just telling OpenClaw to do it.
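For the "check a site once a day and notify me" kind of one-off, the vibe-coded version is genuinely small. Here's a minimal sketch of the deterministic-script approach, assuming a cron job that persists a hash between runs; the URL is a placeholder, not one from the comments above.

```python
# Sketch: detect whether a page changed since the last run.
# Run daily from cron; store the previous hash in a file between runs.
import hashlib
import urllib.request


def fingerprint(html: str) -> str:
    """Hash the page body so changes between runs can be detected."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()


def has_changed(previous_hash: str, html: str) -> bool:
    """True if the page content differs from the fingerprint of the last run."""
    return fingerprint(html) != previous_hash


def fetch(url: str) -> str:
    """Fetch the page body as text (stdlib only, no third-party deps)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")


if __name__ == "__main__":
    url = "https://example.com/after-school-activities"  # placeholder URL
    html = fetch(url)
    # In a real script, load the previous hash from disk and send a
    # notification (email, ntfy, etc.) when has_changed() is True.
    print("changed:", has_changed("previous-run-hash", html))
```

The notification mechanism is left out deliberately; the point is that the change-detection core is a few deterministic, debuggable lines.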


So what I'm doing now is I have termux on my phone with a persistent tmux session that is SSHed into my desktop over tailscale. It stays open all the time with Claude Code running on it in a VM in yolo mode.

If I want to ask it to do something like research and add tasks to my schedule I just tap on termux on my phone, I'm already at the Claude prompt and I just type in or voice dictate in what I want. Claude via skills and MCPs can do literally anything on my computer or connected accounts.

I'm literally not sure what I would use something like openclaw for as every time someone describes it to me it's already something I can do in this system I set up in 20 minutes. Is there something I'm missing here?
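The setup described above boils down to two commands. This is a hedged sketch, not the commenter's actual config: the host, session name, and tailnet name are placeholders, and it assumes the Claude Code CLI is on the desktop's PATH.

```shell
# On the desktop (inside the VM): keep a named tmux session alive
# with the assistant running in it, so it survives disconnects.
tmux new-session -d -s assistant 'claude'

# From Termux on the phone: one hop over the tailnet, straight into
# the persistent session. 'desktop.tailnet-name.ts.net' is a placeholder.
ssh -t desktop.tailnet-name.ts.net 'tmux attach -t assistant'
```

Because the session is detached rather than killed when the phone drops the connection, reattaching lands you back at the same prompt.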


Would be interesting to see an OpenClaw feature that offers to vibe code a deterministic script for scheduled or frequently repeating tasks.

Deterministic scheduled scripts for repeating tasks are where a lot of the value lives; that's basically what Atmita (atmita.com) is built around. You describe the job in plain English, pick when it runs, and it runs on that schedule. Cloud-hosted with managed OAuth to the apps you already use, API-first so you can wire anything else in directly. Not based on OpenClaw, built from scratch.

These are actually really great examples, because I've done several similar things with a more code-based deterministic approach, still utilizing an LLM.

I also have a number of sites that I query regularly with LLMs, but I use a combination of RSS and crawlers to pull the info into a RAG and query that, and have found the built-in agent loops in my LLM to be sufficient for one-offs.
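The RSS-to-structured-data step in that pipeline is small enough to sketch with the stdlib alone. This is an illustrative example, not the commenter's actual code; the sample feed is made up.

```python
# Sketch: flatten an RSS 2.0 feed into dicts ready to index in a
# local store (the feed-to-RAG step described above).
import xml.etree.ElementTree as ET


def parse_rss(xml_text: str) -> list[dict]:
    """Extract title/link/description from each <item> in an RSS feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "description": item.findtext("description", default=""),
        })
    return items


# A tiny hand-written sample feed, just to show the shape of the output.
SAMPLE = """<rss version="2.0"><channel><title>Example feed</title>
<item><title>Post one</title><link>https://example.com/1</link>
<description>First entry</description></item>
</channel></rss>"""

records = parse_rss(SAMPLE)
```

Each record is a flat dict, which is the form you want before chunking and embedding into a RAG store.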

I also hate most weather apps, so I have a weather section on my Home Assistant dashboard that pulls exactly what I want from the sources I want that my LLM helped me configure.

I also have my main todo list hosted on a computer at home, but since all of my machines including my phone are on the same virtual wireguard network, I use my editor of choice to read and write todos on any device I use as if it were a local file, and again, it's something my local LLM has access to for context.

I don't think either approach is wrong, but I much prefer being able to have something to debug if it doesn't behave the way I expect it to. And maybe part of the reason I'm skeptical of the hype is that a lot of the parts of this setup aren't novel to me: I had crawlers and RSS readers and a weather dashboard and private access to a well-organized filesystem across devices before LLMs were a thing - the only difference now is that I'm asking a machine to help me configure it instead of relying on a mix of documentation, example code, and obscure Reddit posts.


This is a good description of the role of software engineer in the age of LLMs.

Most people still don’t think this way and need a software person to know enough about these things to describe them to the LLM.


That was the one I struggled the most with.

I generally mean social invitations sincerely, and expect that other people do too, but also my social anxiety leaves me somewhat relieved if we don't follow through.


I think where the analogy falls apart isn't whether or not there may be expired products on the shelves, it's whether or not the store will make a reasonable effort to make it right and either refund your money or replace the product with one that's known to be good. Most will.

Source: I ditched the techpocalypse at the end of 2024 and now happily work at a grocery store.


I'd offer an edit that the most important skill may be knowing when the agent is wrong.

There's so much hand wringing about people not understanding how LLMs work and not nearly enough hand wringing about people not understanding how computer systems work.


Some of us live off-grid. Some of us live on-grid but the grid is unstable (or we think it may become unstable in the future). Some of us do not have the same time/effort versus money tradeoffs that you do. Some of us want to reduce our emissions more than others. Some of us just really enjoy optimizing.


There may be financial value in being early (if you're lucky), but there are other values in waiting.

My goal in life is not to maximize financial return, it's to maximize my impact on things I care about. I try to stay comfortable enough financially to have the luxury to make the decisions that allow me to keep doing things I care about when the opportunities come along.

Deciding whether something new is the right path for me usually takes a little time to assess where it's headed and what the impacts may be.


> My goal in life is not to maximize financial return, it's to maximize my impact on things I care about.

In the vast majority of cases, financial returns help maximize your impact on the things you care about. Arguably in most cases it's more effective for you to provide the financing and direction but not be directly involved. That's why the EA guys are off being quants.

The only real exceptions are things that specifically require you personally, like investing time with your family, or developing yourself in some way.


I knew this canned rebuttal was coming and almost addressed it in my previous comment.

I've not found this to be true at all, for a variety of reasons. One of my moral principles is that extreme wealth accumulation by any individual is ultimately harmful to society, even for those who start with altruistic values. Money is power, and power corrupts.

Also, the further from my immediate circle I focus my impact on, the less certainty I have that my impact is achieving what I want it to. I've worked on global projects, and looking back at them those are the projects I'm least certain moved the needle in the direction I wanted them to. Not because they didn't achieve their goals, but because I'm not sure the goals at the outset actually had the long term impact I wanted them to. In fact, it's often due to precisely what we're talking about in this thread: sometimes new things come along and change everything.

The butterfly effect is just as real with altruism as it is with anything else.


But you're not supposed to accumulate the wealth, you're supposed to forward it to your elected causes.


Being a quant is inherently accumulating and growing someone's wealth for them, even if it's not your own.

If there were a way to be a true Robin Hood and only extract wealth from the wealthy and redistribute that to poor, I'd call that a noble cause, although finance is not my field (nor is crime, for that matter) so it's not for me.

My chosen wealth multiplier is working at a community-owned cooperative, building the wealth for others directly.


Not sure about this because many charities are designed to spend their income, rather than hoard it. A big part of choosing which charity to donate to is, or should be, how effective they are in spending what you give them.


I mean, I'm not arguing that if you can find a way to make a large amount of money ethically without enriching yourself or the wealthy further, and then find a way to accurately evaluate charities to maximize impact, you shouldn't do that. But there are several very difficult problems embedded in that path, and I could easily see just solving all of them becoming a full-time job by itself.

I also, candidly, haven't ever seen anyone successfully do that.


I want to cure lung cancer, therefore as an Effective Altruist™ I maximize my income by selling cigarettes to children outside playgrounds. The money will go towards research in my will, and in the meantime the incidence of lung cancer in teenagers will incentivize the free market to find a cure!

People don't become quants because they are EAs, they become EAs to justify to themselves why they became quants.


Being a quant is not that interesting and if you're not redirecting the money you're not really an EA, are you?

Your first paragraph is just a standard response to utilitarianism, although a poor one because it doesn't consider EV.

Nonetheless I'm not quite sure why merely mentioning EA draws out all these irrelevant replies about it. It was incidental, not an endorsement of EA.


I didn't realize maximizing money is the way to achieve moral excellence. It's interesting how Puritanical the EA folks are


There is no moral excellence but which you invent for yourself. But given the first principle or goal of 'having the most impact', maximizing money is often quite useful.


Or, utilitarianism


> Arguably in most cases it's more effective for you to provide the financing and direction but not be directly involved. That's why the EA guys are off beng quants.

The EA guys aren't the final word on ethics or a fulfilling life.

Ursula K. Le Guin wrote that one might, rather than seeking to always better one's life, instead seek to share the burden others are holding.

Making a bunch of money to turn around and spend on mosquito nets might seem to be making the world better, but on the other hand it also normalizes and enshrines the systems of oppression and injustice that created a world where someone can make $300,000 a year typing "that didn't work, try again" into Claude while someone else watches another family member die of malaria because they couldn't afford meds.


Nobody is asking about ethics or a fulfilling life. We are talking about maximum _impact_.


Impact only has meaning per a chosen framework to measure within. For example, if I apply my ethical system to measure the impact of an EA, they have essentially no impact, since all they do is perpetuate a system that is the root of the problems they're trying to solve.


To be frank, that anti-system logic sounds a lot like "Why are you taking a shower when there are people dying of thirst in a desert? Plumbing is an inherently unjust system for giving more water to those who already have enough!"

Yes there are flaws in the system, but smugly opting out of it and declaring yourself morally superior isn't helpful. Instead you need to actually do the work of understanding the system, its virtues and flaws before you can propose changes that would actually improve things.


Plumbing doesn't harm the people in the desert. Plumbing isn't an inherent bad.

The system of imperialism that enables some to starve while others eat is inherently bad and is propped up and legitimized when you act within its framework.

Adding plumbing to your house isn't saying "it's normal that people are dying of thirst." Structuring your impact around donations is, meanwhile, saying "though this system results in people starving while others throw away half their food, we can only solve these problems by working really hard within the rules this system defines, and then lending aid within the rules this system defines." After all, there's only one way to make money enough to be "impactful..."

This is a slightly tangential example, and I don't want to be mistaken as saying they're equivalent: buying and freeing slaves is not a good form of activism when trying to overthrow slavery. It's doing the exact opposite: upholding the institution of slavery with every purchase, legitimizing it, and even in fact funding it. You tell yourself you're at least slightly reducing harm, but in reality you're motivating slave catchers to go find more people to enslave - and meanwhile you're doing nothing to address the fact that slave catchers in your own country are just grabbing the slaves you freed.

The only truly ethical choice for activism against slavery is to break chains and use violence against anyone that prevents you from breaking chains.

Again, not exactly equivalent, just an example of how "helping" can actually prop up the thing you think you're trying to take down.


> The only real exceptions are things that specifically require you personally, like investing time with your family, or developing yourself in some way.

So, the things that matter the most for most people?

Studies pretty consistently show that happiness plateaus at relatively modest wealth.


That's not their stated goal. Their stated goal is to maximize impact, not their own happiness.


Impact is nebulous. For example, Zuckerberg has had impact but it’s been almost entirely negative. The world is a worse place for him having existed.


It being signed doesn't make it nebulous.


> That's why the EA guys are off beng quants.

Or in prison for fraud.

