I can understand the wish to make LLMs even more self-driven. After all, that's the idea of a loose prompt: no matter how short, the LLM figures out what most users are expecting. Thanks to RLHF it accomplishes wonders.
My desire, though, is to be able to steer the model exactly where I want. Even assuming token cost isn't an issue, extra autonomy doesn't remove the need for costly review. I would rather think first and polish my ability to provide input.
I do not want an LLM to deep-think, in most cases. Why not let me disable deep thinking altogether? That's where engineers are likely heading: control.
The recently viral 'grill-me' skill is great for exactly this.
It's just a super simple skill that, when invoked, makes the model spend considerable time asking design and architecture questions and fleshing out any plan with you. A planning session without it might be Claude asking you 2 questions; with it, 22.
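For context, a skill like this is typically just a markdown file with instructions the model follows when the skill is invoked. A minimal sketch of what a 'grill-me'-style skill could look like, assuming Claude Code's SKILL.md convention; the name, description, and wording here are illustrative guesses, not the actual viral skill:

```markdown
---
name: grill-me
description: Interrogate the user about design and architecture before writing any code.
---

When this skill is invoked, do NOT start implementing. Instead:

1. Ask pointed questions about requirements, constraints, and edge cases.
2. Challenge assumptions in the user's plan, one topic at a time.
3. Keep asking until the user explicitly says the plan is settled.
4. Only then produce a written plan for approval.
```

The whole trick is that last rule: the model is forbidden from acting until the human signs off, which is exactly the "control" point above.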
I suspect this is part of the reason why Gemini 3.1 Pro is insanely good in AI Studio and pretty bad in the Gemini app. I have thousands of small videos to convert to detailed descriptions, and I'm using a super detailed system prompt. It works perfectly via either the API or AI Studio. I tried making a Gem in the Gemini app using the same prompt as the Gem instructions, and I just can't get the same results. So the issue might be not just the RLHF but also the massive system prompts injected in the app interface.