
Read the Microsoft research paper that came out last week, focus specifically on the section that contains the following...

"Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."



I've seen that paper, and it's amazing indeed what GPT-4 is capable of. But none of that supports that closing quote, which to me points to a worrying philosophical naïveté. How can one equip an entity with "intrinsic motivation"? By definition, if we have to equip it with it, that motivation is extrinsic. It belongs to the one who puts it there.

A software engineer might decide to prompt his creation with something along the lines of "Be fruitful and increase in number; fill the earth and subdue it. Rule over the fish in the sea and the birds in the sky, and over every living creature that moves on the ground," and the bot will duly, and with limitless enthusiasm, put those wise orders into practice. If that ends up causing some "minor" problems, should we confine the software for a thousand years in an air-gapped enclosure, until it learns a bitter lesson? Or should we take to task the careless Maker of the bot?

To me, the answer is obvious.

PS: to me, this video states the problem in a nutshell (that whole channel is gold): https://www.youtube.com/watch?v=hEUO6pjwFOo



