Hacker News

I'm basically an LLM. I trained in a similar way to an LLM: off of books and open source code, guessing what comes next, making errors, adjusting my brain. I make money the same way an LLM does, off of inference calls.

Maybe you don't truly understand how an LLM works, or how it's trained, or how inference works. Or how humans work. Money goes out during training, money comes in during inference.



I understand how they work. If you’re going to make bad-faith arguments, we can drop it here.


Pointing out you are wrong isn't bad faith.

Several people have noted in this thread that you've missed a pretty simple fact.


You claimed you’re “basically an LLM” so at this point it seems accurate. You’re mostly incorrect and spouting nonsense, nailed it!


Most don't consider making an analogy "bad faith".

https://en.wikipedia.org/wiki/Analogy

Even ChatGPT understands that money changes hands in both cases after training/learning:

https://chatgpt.com/share/525aabbb-1fcc-4b1d-a88e-34206c8f5c...

