Hacker News: pavel_lishin's comments

Is it even possible to get into anything resembling such a state? Do managers enter flow state?

I've tried using agentic development for something I understood well, and every time, it's frankly fucking sucked. Even if the output was good - which it usually wasn't - it just didn't feel like the same type of work I actually want to do.

Things like line-completion are fine, though - except for comments; I wish I could tell VSCode's Copilot to never write a line of human thought.


It's always very fun to run into one of these serendipitously.

I'm not sure I understand the point of this; is the intended user of this an insurance company whose agents approve or deny claims?

If so, is there any actual incentive for them to allow doctors & patients to follow up with this sort of paper trail?


Both: the insurer installs this to produce portable evidence that end users can consume. The main users would be the startups building AI agents that interact with insurers on behalf of hospitals. If a hospital needs to process 100,000 claims made by agents and verify them, it's the hospital's records pitted against vendor logs. This lets the vendor hand over a folder that the hospital verifies independently with openssl; no vendor trust required. The incentive: customers (clinics) can process deterministic evidence independently of the vendor or tool (my code), and the vendor's agents build up a track record of trust on their own, without engineers spending their time triaging.
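A minimal sketch of what "verifies independently with openssl" could look like, assuming the vendor ships each record with a detached signature and publishes its public key. All file names and the record format here are made up for illustration; the actual tool's evidence layout isn't described in the comment.

```shell
set -e

# Vendor side (simulated here so the example is self-contained):
# generate a keypair and sign one claim record.
openssl genpkey -algorithm RSA -out vendor_key.pem 2>/dev/null
openssl pkey -in vendor_key.pem -pubout -out vendor_pub.pem 2>/dev/null
echo '{"claim_id": 1, "status": "approved"}' > claim_0001.json
openssl dgst -sha256 -sign vendor_key.pem \
    -out claim_0001.json.sig claim_0001.json

# Hospital side: verify the record using only the handed-over folder
# (record + signature) and the vendor's published public key.
openssl dgst -sha256 -verify vendor_pub.pem \
    -signature claim_0001.json.sig claim_0001.json
```

On success, the final command prints "Verified OK"; any tampering with the record or the signature makes it fail, which is what makes the evidence checkable without trusting the vendor's own logs.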

[2022]

Pull request to notify on setup (2 weeks old): https://github.com/mastodon/mastodon/pull/38548

You would be fired if you had a thought?

Root-level comment has been edited, as noted by respondent lynndotpy <https://news.ycombinator.com/item?id=47785027> and original author Geee <https://news.ycombinator.com/item?id=47785316>.

If I had the thought, in general, that this was a fine thing to do, then yes. Presumably I would do it or permit somebody else to do it and be fired.

Losing your business and income isn't the same as being charged with a crime and jailed.

I upgraded from my buggy, annoying Ender 3 Pro to a Bambu A1, and it's been pretty wonderful so far. I haven't had any need to "babysit" it, and I can trust it to just start a print, and finish it when I get back. It self-levels the bed, etc.

I got the most basic model - a single feed for filament, etc. I recommend it.

People are right that you shouldn't spend too much money, but don't spend too little, either. If you think to yourself, "Well, $300 is a lot for a 3D printer, I'll just get an Ender 3 for $200, or a used Ender 3 for $100", you'll end up getting significantly more frustrated if all you want to do is 3D print things.


For $300 you can get an A1 mini and it's a pretty solidly engineered printer. We're running them until they break. But they don't break...

I second this. Occasionally the head will jam, but it is easy to clean them out. But, the A1 mini is the first device that really just works. It's so much fun.

Do not be an idiot like I was and try to print in an outdoor atrium to avoid fumes. Fumes really aren't an issue these days, and humidity will kill your filament. For humidity control alone, it's worth investing in a humidity-monitoring filament holder.


I got the A1 - I knew I would want to print bigger things eventually, and spending an extra ~$100 seemed like a no-brainer.

I also always get the bigger disk on phones, etc.


> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.

I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as by a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgment alone.

Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.


I assume if they get fired by the AI during the experiment they are still paid to sit at home. It would not invalidate the experiment.

Why do you assume that?

It's about the only way of reconciling experimental validity (if the AI can't "fire" staff and remove them from business operations and its P&L account in situations where it would be legal and normal to do so, is it really running a business?) with avoiding the massive ethical issue of people being arbitrarily fired because a computer glitched. Whether that's what they actually do remains to be seen.

The AI is not really the CEO in the first place. It is not signing contracts (at least not with its own name). It is fundamentally still an automated tool reporting to the real human operators, who are doing more of the actual corporate legal tasks than portrayed in the article.

People can delegate

sure. but in this case, having the ai delegate to humans for any important task sort of undermines the entire premise.

You can still wear eye protection during the safety test...

I don't think we need to have real human risk to get results from the experiment.


The article mentions:

“John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”

which was refreshing to read.


Literally the two sentences immediately following that quote are "For now. As we continue down this path, however, humans will not be able to stay in the loop and such guarantees will be intractable."

Personally I find the entire tone of the article to be creepy and disturbing.


I take that to mean "we won't let the AI refuse to pay them or otherwise break employment law" not that they could never be fired.

I read that as "it's not worth the negative PR of being associated with AI firing minimum wage employees" compared to just paying them for a year or two.

They could, in theory, have contracts that say the AI can't fire them.

It could be set up such that the AI can "fire" them, in that they no longer work at the store, and aren't paid wages that count against the experimental establishment's costs, but still get paid to do something else, or to do nothing at all.

I doubt the experiment is set up that way, but that would be an ethical way to do it.


There’s no way they are putting that into a contract. HR departments are already using AI to fire people.

"This specific AI can't fire anyone without human review, because it's experimental" is something you could easily add.

At this point, I don't think an AI can legally hold a contract with a person, so I don't think an AI could hire a human, and therefore it couldn't fire one either.

That doesn't mean the AI couldn't be the decision maker for the legal entity that's hiring these people.

But the thing is, if this startup is telling these people they are employees of the company, not of "Luna", it gives them the impression that all their interactions with the AI are kind of a sham, a game, not to be taken seriously; they are basically being paid to role-play as "Luna's" employees.

And this is kind of where such experiments are likely to go. Another user mentioned that it would be useful to discover the kinds of inputs and outputs the machine has. A human boss could manage a store with just phone calls and a camera, but I get the vague impression Luna doesn't have anything like that sort of ability, though we really aren't given enough information to make an accurate determination.


What thing?

What's the funny thing?

Just a garden variety racist pretending they live in the state
