Hacker News | new | past | comments | ask | show | jobs | submit | crumpled's comments | login

This article seems to just be a dig at IBM without bringing any receipts or adding any substance. "A colleague told me that they said..."

Honestly, why should I even believe it?


Most users aren't even going to know that this is here. Web developers will expose this capability to the user. The devs will have to determine if the model is delivering what they need.

It's good to have something to work with if these Web APIs are going to be part of a standard. I suppose this means that ALL the browser vendors are likely to implement something.


> I suppose this means that ALL the browser vendors are likely to implement something

Mozilla has taken a strong stand against the Prompt API.


I had to dig around to see where they took the stand.

https://github.com/mozilla/standards-positions/issues/1213#i...

Mozilla makes great points. Even if the API is model-agnostic, which it ought to be designed as from the very beginning to even be considered for a spec, models can behave vastly differently.

Mozilla didn't say this, but the user should at least be presented with an option to choose which model (at least once), starting from day one, even if your browser only has one option available. That's assuming a universe where Google actually plans on being concerned about standards adoption.


Mozilla has classically taken a very strong stand against ever holding true to their values, so we'll probably get one from them in a few months.

So now we're up to 6 GB

Per user

Probably because the articles are arguing that the AI will be used in immoral ways, and that the people who know that and continue doing the work must be morally compromised.

I know that there might be $several ways those highly-paid engineers might still rationalize their work. Some of them might have ideological reasons to treat entire classes of people as unworthy of life. Within the model of their ideologies, the most evil things might be perfectly moral.

I wonder what reasons you have to disagree with people's moral stance against using AI as a weapon.


Speaking in absolutes costs you credibility, except when rallying people to arms.

I stand by it. I'm not including all Google employees, ofc – there are some fantastic projects coming out of there – just the people working on their AI systems which will be accessible to the government with (effectively) no oversight.

I actually don't think it's so nuanced. We know (from its spat with Anthropic) that the government wants the ability to use AI to implement mass surveillance of Americans and fully autonomous killings. We also have ample data that this administration takes the law as a mere suggestion. It's imperative not to make their abuses easier.

Google's researchers aren't stuck there; their skills are in extraordinary demand and I'm sure Anthropic, for example, would hire them in an instant.


I haven't seen this discussed here yet. Wikipedia has decided to deprecate archive.today links because the site has been observed using visitors' browsers to conduct a DDoS attack on the blog of an individual. Perhaps more troubling, archive.today modified archived versions of articles to insert the individual's name into the context of the articles.


Previously, nearly a thousand comments across several stories: https://hn.algolia.com/?q=archive.today


I'm a bit out of my depth in this comment section (not an arch user, am a novice docker admin). But, to me, this comment is eye-opening.


Does anyone know the status of this whole release? The metadata was hosted, and now it's not. I saw a leaked torrent of unpopular tracks.

There have been no statements or blog posts from AA explaining the metadata removal, or an updated release timeline.

Can anyone say more?


They did briefly describe why on Reddit, saying it drew too much attention: https://torrentfreak.com/images/aaconf.png


I guess I'm a little surprised that they felt this release was too hot. I'm not really surprised at the response from the music industry, but rather I thought AA was more confident in their opsec/safety from this sort of threat.


I would assume it's more that they don't want to lose their domains too quickly. Though they've only given one sentence to go on, so it's hard to speculate.


"...the company whose entire brand is AI safety"

Absolutely no AI company is trying to take up this banner. Examples of Claude being used in all kinds of nefarious ways are surfacing all the time, and all those human operators are still current customers. Anthropic has very little to say on the matter.

The idea that Anthropic is a "safety" brand suggests that AI companies operate in a much lower realm of morality.


If you use the Aurora Store instead of the Play Store, you can download APKs. It's a Google Play Store proxy.



Is there a way to convert that XAPK format to APK other than installing their app?


Yes, unzip it; an XAPK is just a ZIP archive containing the base APK (and any split APKs).


I have many apps that refuse to work. They try to open the Play Store app, which does not have a logged-in account.

The app simply doesn't work.


The Aurora Store will identify whether apps require Google Play Services before you try to install them.


I was concerned about that too. Often when you tell them not to do something, you'd have been better off not mentioning it in the first place. It's like they get fixated.


Don't think of a pink elephant.


Best way I've found not to think of a pink elephant is to choose to think of a green rabbit. Really focus on the mental image of the green rabbit... and voila, you're not thinking of, what was it again? Eh, not as important as this green rabbit I'm focusing on.

How to translate that to LLM world, though, is a question I don't know the answer to.

P.S. Obviously that won't prevent you from having that first mental flash of a pink elephant prompted by reading the words. The green-rabbit technique is more for not dwelling on thoughts you want to get out of your head. Can't prevent them from flashing in, but can prevent them from sticking around by choosing to focus on something else.


> Best way I've found not to think of a pink elephant is to choose to think of a green rabbit.

Seems easily circumventable: “Don’t think of a green rabbit.” Now the past vividness of that image becomes a hindrance.


The green rabbit, in this case, is a metaphor for something you want to think of, as opposed to the pink elephant you're trying not to think about. Let's say you're trying to get your mind off of some depressing topic (the pink elephant). Instead of thinking "Don't think about the depressing topic, don't think about the depressing topic" which just makes your mind dwell on it, you pick some other topic that you do want to let your mind dwell on. Specifics will vary wildly between people, but you might decide to think about your next hobby project, or the upcoming movie or sports event or concert you're excited about, or a particularly interesting passage in the book you just read which would reward some deep thought. You'd pick something good, positive, or uplifting; something you know will improve your mental health rather than harm it.

If that's the green rabbit in the metaphor, then at no point would "don't think of a green rabbit" be advice you would want to follow.

