Thanks, I'm very new to this and just run models in LM Studio. I think it would be very useful to have a system prompt telling the model to write Python scripts for the kinds of calculations LLMs are particularly bad at, plus a harness that actually runs those scripts. Can you recommend a harness that you like to use? I suppose the safety of these solutions is its own can of worms, but I am willing to try it.
These are typically coding-oriented as opposed to general chat, so their system prompts may be needlessly heavy for that use case. I think the closest thing to a general solution is the emerging "claw" ecosystem, as silly as that sounds. Some of the newer "claws" do provide proper sandboxing.
I've got an interesting hack brewing for extremely hassle-free tool orchestration - basically think along the lines of .bash_profile-level simplicity... Maybe I'll get that out tomorrow.
I always wondered why there isn't something like TinyScheme built into the tools, so the models could eval simple math and code in a reasonable language that does accurate arithmetic but gives no access to anything outside the interpreter.
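To illustrate the idea (not any particular tool's implementation): a minimal sketch in Python of a whitelisted arithmetic evaluator built on the stdlib `ast` module. It walks the parse tree and only permits numeric constants and a fixed set of operators, so there is no path to imports, attribute access, or function calls - the same "accurate math, no outside access" property a built-in TinyScheme would give.

```python
import ast
import operator

# Whitelisted operators -- nothing outside this table is reachable.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Mod: operator.mod,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a pure arithmetic expression with no access to the host."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        # Anything else (names, calls, attributes, subscripts) is rejected.
        raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2**10 - (3 * 7) % 5"))  # 1023
```

Something like `safe_eval("__import__('os')")` raises `ValueError` because a function call never matches the whitelist. A real harness would also want integer-overflow/timeout guards (e.g. capping `**` exponents), since even pure arithmetic can be made expensive.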