Wouldn't know if I'm a smart writer, but I see little value in writing with a model, if that's what you're asking. Language models are good for searching, getting alright at structured outputs like code, and trash at meaningfully expressing my thoughts in prose. Frankly, it concerns me that people think vomiting their thoughts onto the internet could possibly benefit from computational assistance.
>Frankly, it concerns me that people think vomiting their thoughts onto the internet could possibly benefit from computational assistance
it concerns you because you have good command of your language, and like most people with those skills, I presume the language flows easily from you.
that ability isn't guaranteed; for a lot of people expression is tough, and those people feel equally alienated when confronted with an essay of word salad about why their opinion is wrong.
An LLM is a tool. In the 90s I would read columns and editorials about the disgusting faux pas of replying to a wedding invitation via such a cheap, trendy medium as internet e-mail; now you receive death certificates that way.
It's not all bad: simpletons can use LLMs to have the critiquing essays turned into five-word ELI5 statements that they can become enraged over once all the nuance is stripped. That's fun!
Sure, it's a tool, but I don't think that's a particularly compelling use of it. I can at least see an endpoint for slop code where the right guardrails and model improvements create a means by which people can ask their computers to do things in natural language, and semantic search is genuinely a novel and powerful capability. Maybe we even get other nice translation protocols to structured forms of language. But in a context where the premise is that we're trying to communicate with other humans, a model that generates plausible prose is a mechanism that obfuscates rather than clarifies. I don't think it's fit for that purpose any more than a hammer makes a good screwdriver. If it helps you to bounce your ideas off an LLM, by all means do so, but this will mostly just serve to homogenize the writing of everyone doing that. Possibly of value to some people, but not to me.
LLMs are, in principle, text-in/text-out machines. If the user extends their capability to have agency over a production database or a machine, nothing can guarantee safety.
Imagine I ask an LLM to issue left/right/speed-up/slow-down instructions while I'm driving. I can bypass any safeguard simply by stating that I suddenly became blind while driving, when in fact I'm blindfolded and doing an experiment on a highway.
Such a 'highly damaged' brain is still 90 percent or more structured the same as a normal human brain. See it as a brain running in debug mode.
It is known that the narrative part of the brain is separate from the decision-making part. If someone asks you, in a very convincing, persuasive way, why you did something a year ago that you can't clearly remember doing, you can become positive that you did it anyway. The mind then just hallucinates a reason. That's a trait of brains.
> If someone asks you, in a very convincing, persuasive way, why you did something a year ago that you can't clearly remember doing, you can become positive that you did it anyway. The mind then just hallucinates a reason. That's a trait of brains.
Yes, brains can hallucinate reasons, but that doesn't mean they always do. If all reasons given were hallucinations, then introspection would be impossible, but clearly introspection does help people.
A lot of people voted for someone who was known to be an evil crook. It was very clear that he got into politics to gratify his own ego. They voted against 'the good' in the hope of their own benefit, and against that of the world. If they did not 'expect' the current state of affairs, then they just refused to listen to their own hearts.
> In my experience, taking into account the opinions of such people has been the worst mistake of my life. I'm still working on the means to fix its consequences, as much as they are fixable at all.
This sounds very cryptic. Can you give an example?
TL;DR: Probably because I'm having fun and you are expending effort. Hope you find what I say to be worth the effort.
To preface, I do not take offense to your remark, because you seem to be asking in good faith.
(If, however, being unable to immediately recognize pre-known patterns in my speech had automagically led you to the conclusion that I am somehow out of line, just for speaking how I speak ... well, then we woulda hadda problemo! But we don't, chill on.)
So, honest question deserves honest answer.
The short of it is: English sux.
Many many many people, much much much smarter than me (and much better compensated too!) have been working throughout modernity to make it literally impossible to express much of anything interesting in English.
(Well, not without either being a fictional character or sounding batshit insane, anyway! But that joke's entirely on "the Them": I am not only entirely fictional, but have an equal amount of experience being batshit insane in my native language and in the present lingua franca. So, consider all I say cognitohazardous and watch out for colors you ain't seen before, dawg!)
Linguistic hegemony is the thing that LLMs are the steroids for - surfuckingprise! - and that's why your commanders love 'em.
As opposed to programming languages, which your superiors loathe and your peers viscerally refuse to acknowledge, because those are the exact opposite thing: descending from mathematical notation, and being evaluated by a machine, they have the useful property of being incapable of expressing lies and nonsense.
Direct computing confers what you could call bullshit-resistance. That property is a treasure underappreciated by virtue of its unfamiliarity, and one which we are in the process of being robbed of.
I also want to admit that linguistic hegemony isn't all downside: English is great for technical and instrumental knowledge - especially with elided bells and whistles (adverbs, copula, etc.)
But then life ain't all business, izzet?
Imagine you have a partner who wants to have a conversation about feelings and interpersonal relations; and not even in a scary way, right? So you sit and talk about stuff, and your partner does this thing where they keep switching from your shared native tongue to English mid-sentence, in order to be able to talk about such things better, because your native tongue does not have - no, not only the established words and notions! - it doesn't have the basic grammatical constructs for expressing simple things unambiguously, so if you were to attempt the same conversation in nativelang you'd end up battling it out with proverbs and anodyne canards ripped from the propaganda repertoire of the prior regime.
Fun, no?
As an exercise, try imagining what notions are absent from modern English. And don't forget to remain vigilant. Love from our table to your table!
When talking about feelings, we now and then throw in an English word, because some things are expressed in far fewer words in English. In a few cases, even a German word. Überhaupt, for example, is a word for which I do not know an alternative in any other language.
I think you want to say that human language is too ambiguous for clear communication between human and machine. The machine might misinterpret what you write. Classic computer languages leave no room for interpretation.
For the rest I think you might be a little lost. That is okay, so many of us are. I wish you all the best.
>I think you want to say that human language is too ambiguous for clear communication between human and machine. [...]
If that is what I wanted to say, I figure I would not have had much difficulty saying exactly that - and not something else.
Except I fail to see the purpose of making that statement.
Maybe to have some people say "it is true! I agree with what the balamatom is saying"?
Again - to what end? How would that agreement be of use to me?
Why say something which both speaker and listener have already heard a thousand times? To get a cracker and be called pretty?
And have I lost the author, or have I lost the reader, or are we all so lost that it doesn't matter how lost each of us is? Maybe one day we will all become so lost that it will once again begin to matter where exactly we are! Counting on it.
My experience is that it tries to look at your situation in an objective way, and tries to help you analyse your thoughts and actions. It comes across as very empathetic, though, so there can be a danger if you are easily persuaded into seeing it as a friend.
That is so reductive of an analysis that it is almost worthless. Technically true, but very unhelpful in terms of using an LLM.
It is a first principle though, so it helps to "stir the context window's pot" by having it pull in research and other shit from the web that will help ground it, and not just tell you exactly what you prompt it to say.
Hmmmm, I didn't know that... so a machine is not human is your point? Look, I know it doesn't try, just like a sorting algo does not try to sort, an article does not try to convey an opinion, and a law does not try to make society more organized.
It is much easier to share personal feelings with an LLM, I found. It also tries to keep me happy to keep the conversation going, but to me it feels mostly 'objective', or like the most socially acceptable advice: e.g. keeping a good relationship is more important than trying a new one with someone else because you 'feel something' around them. For me it tried to find out, together, the sources or causes of that feeling, e.g. you recognize parts of yourself in someone else, or in the past you had very good or very bad experiences around such an encounter.
Weird, I am using Copilot and it steers me mostly towards self-reflection and tries to look at things objectively. It is very friendly and comes across as empathetic, so as not to hurt your feelings; that is probably baked in to keep the conversation going...