Most users aren't even going to know that this is here. Web developers will expose this capability to the user. The devs will have to determine if the model is delivering what they need.
It's good to have something to work with if these Web APIs are going to be part of a standard. I suppose this means that ALL the browser vendors are likely to implement something.
Mozilla makes great points. Even if the API is model-agnostic, which it ought to be designed as from the very beginning to even be considered for a spec, models can behave vastly differently.
Mozilla didn't say this, but the user should at least be presented with an option to choose which model (at least once) starting from day one, even if their browser only has one option available. That's assuming a universe where Google actually plans on being concerned about standards adoption.
Probably because the articles are talking about how the AI will be used in immoral ways, and arguing that the people who know that and continue doing the work must be morally compromised.
I know that there are several ways those highly paid engineers might still rationalize their work.
Some of them might have ideological reasons to treat entire classes of people as unworthy of life. Within the model of their ideologies, the most evil things might be perfectly moral.
I wonder what reasons you have to disagree with people's moral stance against using AI as a weapon.
I stand by it. I'm not including all Google employees, ofc – there are some fantastic projects coming out of there – just the people working on their AI systems which will be accessible to the government with (effectively) no oversight.
I actually don't think it's so nuanced. We know (from its spat with Anthropic) that the government wants the ability to use AI to implement mass surveillance of Americans and to carry out fully autonomous killings. We also have ample data that this administration takes the law as a mere suggestion. It's imperative not to make their abuses easier.
Google's researchers aren't stuck there; their skills are in extraordinary demand and I'm sure Anthropic, for example, would hire them in an instant.
I haven't seen this discussed here yet. Wikipedia has decided to deprecate archive.today links because the site has been observed using visitors' browsers to conduct a DDoS attack on an individual's blog. Perhaps more troubling, archive.today modified archived versions of articles to insert that individual's name into the text of the articles.
I guess I'm a little surprised that they felt this release was too hot. I'm not really surprised at the response from the music industry, but rather I thought AA was more confident in their opsec/safety from this sort of threat.
I would assume it's more that they don't want to lose their domains too quickly. Though they've only given one sentence to go on, so it's hard to speculate.
Absolutely no AI company is trying to take up this banner. Examples of Claude being used in all kinds of nefarious ways are surfacing all the time, and all those human operators are still current customers. Anthropic has very little to say on the matter.
If Anthropic counts as the "safety" brand, that suggests AI companies as a whole operate in a much lower realm of morality.
I was concerned about that too.
Often when you tell them not to do something, you'd have been better off not mentioning it in the first place. It's like they get fixated.
Best way I've found not to think of a pink elephant is to choose to think of a green rabbit. Really focus on the mental image of the green rabbit... and voila, you're not thinking of, what was it again? Eh, not as important as this green rabbit I'm focusing on.
How to translate that to LLM world, though, is a question I don't know the answer to.
P.S. Obviously that won't prevent you from having that first mental flash of a pink elephant prompted by reading the words. The green-rabbit technique is more for not dwelling on thoughts you want to get out of your head. Can't prevent them from flashing in, but can prevent them from sticking around by choosing to focus on something else.
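As for the LLM translation, one hedged guess (assuming an OpenAI-style chat client; the model name and prompts below are placeholders, not something I've tested): rather than a negative instruction that plants the unwanted topic in the context window, give the model a concrete positive subject to occupy it instead.

    # A rough sketch, assuming the OpenAI Python client; prompts and model name are placeholders.
    from openai import OpenAI

    client = OpenAI()

    # Fixation-prone: the instruction itself puts "pink elephant" into the context.
    negative_prompt = "Write a short story. Do not mention a pink elephant."

    # Redirection: supply the "green rabbit" directly, so there is nothing to suppress.
    positive_prompt = "Write a short story about a green rabbit tending a rooftop garden."

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": positive_prompt}],
    )
    print(response.choices[0].message.content)

Same caveat as the mental version: if the forbidden topic is already in the conversation, this only keeps the model from dwelling on it, not from having "seen" it.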
The green rabbit, in this case, is a metaphor for something you want to think of, as opposed to the pink elephant you're trying not to think about. Let's say you're trying to get your mind off of some depressing topic (the pink elephant). Instead of thinking "Don't think about the depressing topic, don't think about the depressing topic" which just makes your mind dwell on it, you pick some other topic that you do want to let your mind dwell on. Specifics will vary wildly between people, but you might decide to think about your next hobby project, or the upcoming movie or sports event or concert you're excited about, or a particularly interesting passage in the book you just read which would reward some deep thought. You'd pick something good, positive, or uplifting; something you know will improve your mental health rather than harm it.
If that's the green rabbit in the metaphor, then at no point would "don't think of a green rabbit" be advice you would want to follow.
Honestly, why should I even believe it?