
Which is fine. If all AI does is represent human knowledge in a way that makes it explainable and transformable rather than merely searchable, then the hype is justified... along with Google's howling, terrified panic.

The role played by humans on the training side is of little interest when considering the technology from a user's perspective.



The problem is that my back-and-forth with Claude is just Claude's data, not available to anyone else. Unlike Stack Overflow, which is fair game for every AI.


I think the most interesting aspect of it is the human training. Human blind spots, dogma, ignorance, etc. All on demand and faster than you can validate its accuracy or utility. This is good.


Shrug... I don't know what anyone expected, once humans got involved. Like all of us (and all of our tools), AI is vulnerable to human flaws.


I think that’s really important to reinforce! You probably know better, but lots of the less technical people I talk to don’t think that way. It’s not at all obvious to an observer who doesn’t know how this stuff works that a computer could be racist or misogynist.


Yeah, I do think that's going to be a problem.

Years ago, my GF asked me why we bother with judges and juries, given all the uneven sentencing practices and other issues with the current legal system. "Why can't the courts run on computers?" This was back in the pre-AlphaGo era, so when I answered her, I focused on technical reasons why Computers Can't Do That... reasons that are all basically obsolete now, or soon will be.

The real answer lies in the original premise of her question: because Humans Also Can't Do That with the degree of accuracy and accountability that she was asking for. Our laws simply aren't compatible with perfect mechanized jurisprudence and enforcement. Code may be law, but law isn't code.

That problem exists in a lot of areas where people will be looking to AI to save us from our own faults. Again, this has little to do with how training is conducted, or how humans participate in it. Just getting the racism and misogyny out of the training data isn't going to be enough.


Also: It's not just about what tasks can or can't be done, but what other frameworks you can or can't build around the executor to detect errors and handle exceptional cases.



