> For minutes at a time this kind of thing would be running through my head: ‘He pushed the door open and entered the room. A yellow beam of sunlight, filtering through the muslin curtains, slanted on to the table, where a matchbox, half-open, lay beside the inkpot. With his right hand in his pocket he moved across to the window. Down in the street a tortoiseshell cat was chasing a dead leaf,’ etc., etc. This habit continued until I was about twenty-five, right through my non-literary years. Although I had to search, and did search, for the right words, I seemed to be making this descriptive effort almost against my will, under a kind of compulsion from outside.
This is fascinating and totally alien to my experience. I don't often think in words at all unless I am preparing to either write or speak them.
I have a constant droning monologue that only stops when I sleep or meditate. But I also know at least one author who doesn't think in words at all, even when preparing to write or speak them.
> Our creative team may use AI tools in the production of certain visual material, but the creative direction and editorial judgment are human-driven.
As opposed to what? This is a little facetious, but what could it possibly mean to have creative direction and editorial judgment without human involvement?
Presumably we're talking about an image generated by a diffusion model or something, but further, an image generated without being edited by any human. The prompt used to generate the image isn't written by a human, and it can't really be based on the contents of the (human authored and edited) article either. No human may select the service or model used, and once generated the image is published sight unseen, without being reviewed by any human.
If some kind of agentic AI does any of these things it is one which appears ex nihilo, spontaneously appearing without being created or directed by any human.
There's a good post from Aurich in the comments of the article detailing the practical reality of how they (don't) use AI tools in their image work, but as a policy statement this sentence is 100% vibes, 0% actual guidance or restriction.
It feels like the results stopped being interesting a little while ago but the practice has become part of simonw's brand, and it gives him something to post even when there is nothing interesting to say about another incremental improvement to a model, and so I don't imagine he'll stop.
I agree with the sentiment, it is good to care, it is admirable and perhaps virtuous to care.
But it is not cool to care. Cool does mean detached, offhand, poised, aloof, unperturbed. That's why it's called "cool".
We don't need to hijack the term and pretend that it's cool to be enthusiastic and dorky and to talk too loudly when we get excited about something. The point is that those things are good even if they're not cool.
The term has been around since the 1930s, and like all old slang that started out niche and went mainstream, it has been generalized and had its meaning diluted to the point of near-meaninglessness. There are readings of it where it and its opposite could describe the same thing, i.e. uncool and cool are the same. (Easy test to gauge meaninglessness.)
I use it as a generic marker of approval or assent similar to "OK."
No, that's a myth that we "uncool kids" fall into without realizing it.
Cool means you are confident. You're unbothered by what other people say because you have faith in yourself, you like yourself.
It does not mean "I don't care about anything." It just so happens that the average person doesn't care a lot about "things," so the average cool person doesn't either.
cool means you appear confident, attractive, and socially effortless even if there is effort behind the scenes. the meaning has been stretched to mean "socially desirable" which is a slightly broader category
Confident, attractive and socially effortless are all highly socially desirable traits in a friend or mate. This is obvious to anyone who isn't socially problematic. I can tell you've never been cool!
Who's stretching the definition of "cool?" Cool is a social convention, not a scientific term.
The bottom line is, "people" don't like being around anxiety-riddled individuals with social hangups. They like people who are "cool."
The worst thing you can do if you want to be cool is get upset about how other people are telling you you aren't.
"cool!" (especially for the uncool) has always meant something that resonates emotionally with you - keep using it that way.
"cool..." means the flat, disconnected response - we don't need any more of that.
This post is definitely about the former, and we can double down by not letting the wet blankets in the comments use the latter to tell us "you're doing it wrong".
> Cool does mean detached, offhand, poised, aloof, unperturbed.
Cool is from “Old English col "not warm" (but usually not as severe as cold)”. If you’re using it to mean detached you’ve already accepted that words can change meanings (add, remove, or modify) over time. ;)
I wonder what criticisms could be leveled at the virtues of proactive energy, passion, and incessant curiosity. I notice that they make me feel slightly nauseous. This is something I'm curious about. What really is a dork?
Someone who might not be very smart per se, but whose mind has not yet been murdered and replaced by a sort of mandatory prosthesis made out of complex conditioned responses (after the operation, they'd be more properly an ork).
Yeah, turns out the uncanny valley effect works in both directions.
The Twentieth Anniversary Macintosh came to be regarded as such a mistake and quintessential example of how misguided Apple was during the wilderness era that I'm not surprised they went in the opposite direction. Institutional memory etc etc
> The New Mexico attorney general’s office created multiple fake Facebook and Instagram profiles posing as children as part of its investigation into Meta. Those test accounts encountered sexually suggestive content and requests to share pornographic content, the suit alleges.
> The fake child accounts were allegedly contacted and solicited for sex by the three New Mexico adult men who were arrested in May of 2024. Two of the three men were arrested at a motel, where they allegedly believed they would be meeting up with a 12-year-old girl, based on their conversations with the decoy accounts.
and
> “The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls,” Bejar said.
This is what it's about right? The article doesn't make it seem like encryption is meaningfully part of this case at all.
> Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
There's no indication that that decision, or the announcement, is directly related to the trial; maybe they just happened at the same time? It's a link drawn by CNN, without presenting any clear connection.
They have been under a lot of pressure for years to disable e2e messaging because it prevents them from monitoring messages for child abuse. This was a central point of the trial. While they haven't given a reason for the change, I think it's reasonable to infer it is in response to this pressure.
However, there is another possible explanation:
> Tom Sulston, head of policy at Digital Rights Watch, said rather than acceding to law enforcement demands, the move was more likely due to Meta deciding against moving messaging on WhatsApp, Facebook and Instagram to a single platform.
This is the funniest possible answer to "Sounds like you've never changed a tire. Or at least not outside of a very controlled environment."
"Oh, you think I've never changed a tire? Well here is my abstract high level understanding of the steps to changing a tire! And have you considered the quintessential controlled environment for putting tires onto cars?"
The "edge cases" make a simple task like this more difficult. What if the nuts are stripped? What if the terrain under/around the car is uneven or not solid ground? What if it's raining or snowing or hailing? What if the driver of the car is irrationally upset and kicks your tire-changing robot over? What if a tire change was requested, but it's clear (to a human) that there is more work that needs to be done?
Every new successful tool doesn't start by trying to meet every need or edge case. They perfect the main case, and then edge cases in priority of likelihood.
Car washes are automated even though they haven't answered the edge cases of how to wash your car when your car is rolled on its side or a terrorist is actively blowing up the equipment. They simply only operate when your car is right side up (and other conditions, like in neutral, wipers off, and a driver who is willing to not exit the vehicle) and when there aren't active bombings on the building. And other "edge" cases.
Just because there is a possibility for something to not work doesn't make it useless. Automated tire replacements could start with very rigid cases where they are applicable, and expand the scope slowly to allow more cases, like a bent wheel or poor weather.
Then you see a mechanic for the 5% of cases where it's weird? If you think AI is replacing 100% of software engineers anytime soon, idk what to tell you.