Absolutely. It doesn't have to be an either-or. I use gptel and org mode when I want to be really hands-on driving the development. It's a very different mode of interacting with models, and the way newer models are trained to play nice with harnesses makes them very obedient.
In case anyone else wondered about using gptel to edit thinking (e.g. via Qwen3.6's `preserve thinking`), [1] explains:
> In a multi-turn request, from the time you run `gptel-send`, everything the LLM sends is passed back to it [...during tool calls...] includes multiple reasoning blocks. [...But...] subsequent gptel-send calls read their input from the buffer contents (or active region, etc), so the reasoning blocks in the buffer will not [] be sent as "reasoning_content".
But in org mode, those are apparently `#+begin_reasoning` blocks (controlled by `gptel-include-reasoning`?), so editable thought might be an easy addition?
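For anyone who hasn't seen it, a sketch of what that might look like in an org buffer. This is an assumption about the block layout and the variable's behavior, not verified against gptel's source:

```elisp
;; Hypothetical: ask gptel to keep reasoning in the buffer rather than
;; discarding it. The exact accepted values may differ; check
;; `gptel-include-reasoning' in your gptel version.
(setq gptel-include-reasoning t)

;; The org buffer would then contain editable blocks like:
;;
;;   #+begin_reasoning
;;   The user wants a non-streaming request here, so I should...
;;   #+end_reasoning
;;
;;   Here is the updated function:
;;
;; Editing the text inside #+begin_reasoning before the next
;; `gptel-send' is the "editable thought" idea, but per [1] it would
;; currently be sent back as plain content, not as reasoning_content.
```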
A caution, fwiw: any LLMs that respond with interleaved content and reasoning blocks currently only work when not streaming, and fixing that is non-trivial. [also 1]
OMG, some of those are legit good. That said, the AI seems minimally guidable. It seems to ignore the majority of instructions in https://suno.com/song/25b16ab7-bfea-451d-abb3-8b52cdd783d0?s... so I guess, like most tools, it's fine if you want to take what you're given but not really control it.
A generalized LLM prompting library for Clojure, and seeing what falls out of that. I wanted something that is fun to use interactively, but not too abstracted.
I don't let it execute Emacs Lisp itself, but elisp generated in org-mode babel blocks, which is instantly executable, is a fine way to have gptel improve itself.
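Concretely, the human stays in the loop because babel blocks only run when you hit `C-c C-c` on them. A minimal sketch of the kind of block gptel might emit into the conversation buffer (the helper function here is purely illustrative, not actual gptel output):

```elisp
;; Org babel keeps execution opt-in: gptel writes the block, you review
;; it, and only C-c C-c evaluates it.
;;
;;   #+begin_src emacs-lisp
;;   ;; Hypothetical helper the model proposed for its own workflow:
;;   (defun my/gptel-count-response-words (beg end)
;;     "Report the word count of the region BEG..END."
;;     (interactive "r")
;;     (message "%d words" (count-words beg end)))
;;   #+end_src
;;
;; The #+RESULTS: block that org inserts after evaluation is itself
;; visible to the model on the next `gptel-send', closing the loop.
```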