
LLMs learned from human writing. They might amplify the frequency of some particular affectations, but they didn't come up with those affectations themselves. They write like that because some people write like that.






Those are different levels of abstraction. LLMs can say false things, but the overall structure and style are, at this point, generally correct (if repetitive/boring at times). Same with image gen: models can get the general structure and vibe pretty well, but inspecting the individual "facts", like the number of fingers, may reveal problems.

That seems like a straw man. Image generation matches style quite well. LLM hallucination conjures untrue statements while still matching the training data's style and word choices.



> AI may output certain things at a vastly different rate than it appears in the training data

That’s a subjective statement, but generally speaking, not true. If it were, LLMs would produce unintelligible text & images. The way neural networks function is fundamentally to produce data that is statistically similar to the training data. Context, prompts, and training data are what drive the style. Whatever trends you believe you’re seeing in AI can be explained by context, prompts, and training data, and aren’t an inherent part of AI.
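To make "statistically similar to the training data" concrete, here is a toy, hypothetical bigram sketch in Python (nothing like a production LLM, just the same principle at miniature scale): everything the sampler can emit, and how often, comes straight from counts over the training text.

    import random
    from collections import Counter, defaultdict

    def train_bigram(tokens):
        # For each token, count which tokens follow it in the training text.
        counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def sample_next(counts, prev):
        # Pick the next token in proportion to how often it followed `prev` in training.
        options = counts[prev]
        if not options:
            return None  # `prev` was never followed by anything in the training text
        choices, weights = zip(*options.items())
        return random.choices(choices, weights=weights, k=1)[0]

    # Toy "training data": the output can only recombine patterns seen here.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    model = train_bigram(corpus)

    out = ["the"]
    for _ in range(8):
        nxt = sample_next(model, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    print(" ".join(out))

Real models replace the lookup table with a learned network and condition on far more context, but the sampling step is still "draw from a distribution fit to the training data," which is why style tracks the corpus and the prompt.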

Extra fingers are known as hallucination, so if you mean some different phenomenon, nobody knows what you’re talking about, and you are conceding that your analogy to fingers doesn’t work. In the case of images, the tokens are effectively pixels (or patches of them), while in the case of LLMs, the tokens are roughly subword chunks. Finger hallucinations reflect a lack of larger structural understanding; they still statistically mimic the inputs and are not examples of frequency differences.
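As an aside on token granularity, here is a quick, hedged illustration (assuming the tiktoken package is installed; exact splits vary by vocabulary) of what a GPT-style byte-pair-encoding tokenizer actually emits:

    import tiktoken

    # cl100k_base is a byte-pair-encoding vocabulary used by several GPT-style models.
    enc = tiktoken.get_encoding("cl100k_base")

    ids = enc.encode("Hallucinations statistically mimic the training data.")
    pieces = [enc.decode_single_token_bytes(i).decode("utf-8", errors="replace") for i in ids]
    print(pieces)  # mostly whole words and subword chunks, not syllables or single characters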


This is a bad faith argument and you know it.



