Is it even possible to get into anything resembling such a state? Do managers enter flow state?
I've tried using agentic development for something I understood well, and every time, it's frankly fucking sucked. Even if the output was good - which it usually wasn't - it just didn't feel like the same type of work I actually want to do.
Things like line-completion are fine, though - except comments; I wish I could tell VSCode's Copilot to never write a line of human thought.
Both: the insurer installs this to produce portable evidence that end users can consume. The main users would be the startups building AI agents that interact with insurers on behalf of hospitals.
If a hospital needs to process 100,000 claims made by agents and verify them, it's their records pitted against vendor logs. This lets the vendor hand over a folder the hospital can verify independently with openssl. No vendor trust required.
The incentive: customers (clinics) can verify deterministic evidence independently of the vendor or tool (my code), and the vendor's agents build up a track record of trust on their own, without engineers spending their time triaging.
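To make the "verify independently with openssl" claim concrete, here is a minimal sketch of what that check could look like, assuming the vendor's folder contains the evidence file, a detached signature, and the vendor's public key. The filenames (`evidence.json`, `evidence.sig`, `vendor_pub.pem`) and the choice of RSA/SHA-256 are hypothetical, not taken from the tool described above.

```shell
# Verify a detached SHA-256 signature over the evidence file using
# the vendor's published public key. Prints "Verified OK" on success.
openssl dgst -sha256 -verify vendor_pub.pem -signature evidence.sig evidence.json
```

Because this is plain openssl against files on disk, the hospital needs nothing from the vendor at verification time beyond the folder itself and the already-published key.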
I upgraded from my buggy, annoying Ender 3 Pro to a Bambu A1, and it's been pretty wonderful so far. I haven't had any need to "babysit" it, and I can trust it to just start a print, and finish it when I get back. It self-levels the bed, etc.
I got the most basic model - a single feed for filament, etc. I recommend it.
People are right that you shouldn't spend too much money, but don't spend too little, either. If you think to yourself, "Well, $300 is a lot for a 3D printer, I'll just get an Ender 3 for $200, or a used Ender 3 for $100", you'll end up getting significantly more frustrated if all you want to do is 3D print things.
I second this. Occasionally the head will jam, but it's easy to clean out. Still, the A1 mini is the first device that really just works. It's so much fun.
Do not be an idiot like I was and try to print in an outdoor atrium to avoid fumes. Fumes really aren't an issue these days, and humidity will kill your filament. It's worth investing in a filament holder with humidity monitoring.
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.
I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as by a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgment alone.
Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.
It's about the only way of reconciling experimental validity (if the AI can't "fire" staff and remove them from business operations and the P&L in situations where it would be legal and normal to do so, is it really running a business?) with not having the massive ethical problem of people being arbitrarily fired because a computer glitched. Whether that's what they actually do is TBC.
The AI is not really the CEO in the first place. It is not signing contracts (at least not with its own name). It is fundamentally still an automated tool reporting to the real human operators, who are doing more of the actual corporate legal tasks than portrayed in the article.
“John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”
Literally the two sentences immediately following that quote are "For now. As we continue down this path, however, humans will not be able to stay in the loop and such guarantees will be intractable."
Personally I find the entire tone of the article to be creepy and disturbing.
I read that as "it's not worth the negative PR of being associated with AI firing minimum wage employees" compared to just paying them for a year or two.
It could be set up such that the AI can "fire" them, in that they no longer work at the store, and aren't paid wages that count against the experimental establishment's costs, but still get paid to do something else, or to do nothing at all.
I doubt the experiment is set up that way, but that would be an ethical way to do it.
At this point, I don't think an AI can legally hold a contract with a person, so it couldn't hire a human, and therefore couldn't fire one either.
That doesn't mean the AI couldn't be the decision maker for the legal entity that's hiring these people.
But the thing is, if this startup is telling these people that they're employees of the company, not of "Luna", it gives them the impression that all their interactions with the AI are a kind of sham - a game not to be taken seriously - and that they're basically being paid to role-play as "Luna's employees".
And this is kind of where such experiments are likely to go. Another user mentioned that it would be useful to learn what kinds of inputs and outputs the machine has. A human boss could manage a store with just phone calls and a camera, but I get the vague impression Luna doesn't have anything like that sort of ability - though we really aren't given enough information to make any accurate determination.