
> To take an example, asking ChatGPT to write a professional sounding resignation letter might look absurd to people who value writing in itself, but the people doing it are still expressing a very specific intent: "I quit".

But they're not expressing other very specific intents: going out of their way to assure the team that the new opportunity was unmissable, pointedly thanking some people but not others, or being deliberately terse because frankly the whole experience was a bit shit. (And if they're not careful, they might be expressing particular intents they specifically don't wish to convey.)

And ChatGPT itself generally gives the most verbose answer possible, because the people training it reward verbose, somewhat ambiguous responses more than clear but not-quite-right ones, so I suspect its impact on prose norms will be the exact opposite...



I think that's really the crux of it.

We currently require people to express the full extent of their intentions. Not writing that you feel sorry for the colleagues you are leaving in the dust is a faux pas. ChatGPT fills that gap by generating random acceptable feelings, for when you either don't care or don't want to express your actual ones.

I'd argue that adding people to thank, or sneaky jabs at the ones you hate, wouldn't add much to the prompt when asking the bot for the letter.

The joke, of course, is that I'd expect we'll progressively see people send the bot a prompt, have it expanded into verbiage, and then the receiving end tl;dr it with the same bot to recover basically the original prompt.



