Hacker News | onion2k's comments

The scientists are being killed by space fireballs!? This is a conspiracy bigger than I thought!

Given that you can read the code, stars seem to be a completely pointless proxy.

Imagine you're choosing between 3 different alternatives, and each is 100,000 LOC. Is 'reading the code' really an option? You need a proxy.

Stars aren't a good one because they're an untrusted source. Something like a referral would be much better, but in a space where your network doesn't have much knowledge, a proxy like stars is the only option.


> Is 'reading the code' really an option? You need a proxy.

100k is small, but you're right, it can be millions. I usually skim through the code though, and it's not that hard. I don't need to fully read and understand the code.

What I look at is: high-level architecture (is there any, is it one big lump of code or modular, and if modular, what kinds of modules and components it has and how they interact), code quality (structuring, naming, aesthetics), and bus factor (how many people contribute to and understand the code base).


Ask Claude to help. Read the dang code. You'll be more confident in your decision and better positioned to handle any issues you encounter.

It's too much work, so you decide to trust the opinion of someone else who probably also hasn't done the work.

I don't think I have ever even considered using star count as a factor for picking from alternatives.

Looking at the commit history, closed vs open issues and pull requests provides a much more useful signal if you can't decide from the code.
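
If you want to compare those signals across several candidate repos, they can be boiled down to a rough score. A minimal sketch, assuming you've collected the numbers by hand or via an API client beforehand; the `RepoStats` fields, the caps, and the weights are entirely made up for illustration, not any real API:

```python
from dataclasses import dataclass

@dataclass
class RepoStats:
    # Hypothetical numbers you'd gather yourself; not a GitHub API object.
    commits_last_90d: int
    distinct_committers_last_90d: int
    open_issues: int
    closed_issues: int

def health_score(s: RepoStats) -> float:
    """Crude, illustrative heuristic: active, multi-maintainer repos
    that actually close their issues score higher."""
    activity = min(s.commits_last_90d / 100, 1.0)               # cap at 100 commits
    bus_factor = min(s.distinct_committers_last_90d / 5, 1.0)   # cap at 5 people
    total = s.open_issues + s.closed_issues
    triage = s.closed_issues / total if total else 0.5          # no issues -> neutral
    return round(0.4 * activity + 0.3 * bus_factor + 0.3 * triage, 3)

print(health_score(RepoStats(120, 8, 40, 360)))  # active, well-triaged: 0.97
print(health_score(RepoStats(2, 1, 90, 10)))     # quiet, drowning in issues: 0.098
```

The exact weights don't matter much; the point is that a couple of objective inputs beat a raw star count.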


It would be nice if there was a whitelist option for non-sensitive content.

There's no such thing as non-sensitive content from a CDN, though. Scripts are obviously sensitive, styles can be used to exfiltrate data through background-image URLs, and things like images gain no benefit from being cached across sites.

Fonts might be one exception, but I bet those are exploitable somehow.


Given just how nasty bromine is, I imagine air freight would not be legal over any populated flight corridor. That'll make it impossible to fly into Korea.

I don't believe route limitation for dangerous goods is a thing. I looked on https://www.iata.org/en/publications/newsletters/iata-knowle...

Interestingly, I asked both Claude and ChatGPT "does the Infectious Substances Shipping Regulations include anything about what routes for airfreight are allowed?" and both flagged it and wouldn't respond, although switching to Sonnet 4 allowed Claude to answer.


ChatGPT has gotten 'sensitive', almost unusable, in the last week. That seems like a simple question, and it refused. It's done the same to me on very simple, generic questions. It somehow infers something much more nefarious, then refuses.

It doesn't infer anything; you just hit a blacklisted token.

Given the importance of DRAM, I imagine they would get their own plane if required.

The issue isn't the plane. It's being allowed to fly over places where people live.

No better time than now to get into suborbital cargo freight business.

Boeing makes the minuteman 3, maybe now is a good time to invest.

You assume what they say is the same as what they are thinking.

The converse is also true. People saying something assume that the people listening are understanding and thinking about the same thing. This is why it's important to write things down in detail and in as unambiguous a form as you can.

If you're in a meeting and someone puts up a slide deck with a 6 word bullet point that 'explains' what they want, that is a signal that literally no one understands the goal. If they call a meeting without writing a one-page doc about it, they don't understand it well enough to explain it.

And if your progression hangs off delivering that thing, you should be demanding that you get a clearer picture.


You also need to force them to justify their requirements, since asking for something way beyond what you actually need is an easy way to hide the fact that they don't understand what they actually need.

In my experience, asking for 10x the actual requirement is fairly usual for people like that. But every once in a while you hear someone say "we should buy the best, so we don't have to worry about it in the future" (when I heard it, that was a 500x cost difference).


> You also need to force them to justify their requirements, since asking for something way beyond what you actually need is an easy way to hide the fact that they don't understand what they actually need.

I've had a lot of success by shortcutting the refinement/request sessions with clients by simply asking "What is it you need to do?".

Due to an escalation from a client, I joined a meeting with a dev and the client to try and figure out why a single report was taking over a month to deliver. The dev reported (privately) to me that the client kept changing their mind about what needed to be in the final report.

When I finally asked the client my magic question, it turned out they may not even need that specific report anyway - they're just not sure what can be retrieved, so they wanted one single report for everything they might want to do, now and in the future, attempting to squish hierarchical data and tabular data from SQL queries into a single gigantic report.

There's no way the dev was ever going to have a finished report for them. I broke it down into several simpler reports, some of which already exist, which turned a very frustrated client into, well, not exactly happy, but at least they are less frustrated now. They have some of the data generated daily now, and we can do the other stuff as and when they see a need for those reports.


> about the same thing

Yes. I have to keep asking my colleagues "about what?" 4-5 times in a row, at least twice daily, until they finally realize they have to tell me which client, feature, product or whatever else they are referring to.

Even if I know exactly what they are talking about.


If there is a tech-person problem, it is this one. I constantly have to interrupt colleagues when they try to explain a thing, as their explanation attempts are usually way too low level or even border on being self-referential. So they explain the concept by using other concepts the listener won't understand either.

In my opinion it all boils down to a lack of ability to remember how one felt before understanding a certain concept. If you did remember, you would have an empathic understanding of how word-salady a lot of the explanations are.

The first thing you need to tell an uninitiated person is simply where to generally place the concept and how they already know something like it. If you explain DNS, for example, you explain it via the domains they know and how it is like a contacts list for web servers, so your browser knows where to look when looking for google.com.
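
Once the "contacts list" framing has landed, you can show a technical audience that the lookup itself is a single call in most languages. A Python sketch using the system resolver (localhost is used here just so it works offline):

```python
import socket

# Ask the system resolver: "what address does this name map to?"
# This is the lookup a browser does before it can connect anywhere.
addr = socket.gethostbyname("localhost")
print(addr)  # loopback address, e.g. 127.0.0.1
```

For a real domain you'd pass "google.com" instead, which is exactly the moment the contacts-list analogy clicks for most people.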

Whenever you explain anything you might want to ask yourself why the other side should even begin to care and how it connects to their life and existing knowledge. What problem did it originally intend to solve?

Many tech people may start in a different


When I taught I often thought of it as explaining in a spiral: first I must go around the concept, before I can dive in to the concept. Going around gives boundary and definition to what I'm talking about, allowing people to place it in the proper spot in their mental framework and to relate it with other nearby things. It also gives some motivation for what this thing is and why they should care. Then when that is done (and it does not take long), the details can be discussed, and they're easier to communicate because people are primed to receive them.

Mine is "What's the question?"

I basically have to ask that almost every other time someone starts a conversation.

They talked for 3 minutes and never actually articulated a problem or request, but just sort of recounted some seemingly random but presumably related facts.


Yep, that's a good one.

If my experience with beginning programmers generalizes to "the real world" (and I strongly suspect that it does, because the mindset non-programmers bring when they start programming has to come from somewhere) then I can also propose: "What needs to happen? What has been tried so far? What happened when that was tried, and how is that different?"


> This is why it's important to write things down in details and as-unambiguous-as-you-can forms.

While that might be a prerequisite for a deep shared understanding, my experience over the last few years is that the number of people who really read more than the first sentence of any message/ticket/email is consistently decreasing. I often have to feed them the information in very small and easy-to-digest portions. I so dislike that.


Nobody reads the docs, tickets, or comments under a task, nobody really checks the code they are reviewing, and nowadays, thanks to AI, some people don’t even read the code they “write”.

People love to ask for documentation, as long as it doesn’t exist. It lets them off the hook: “oh, I would have known what to do, I wish we had this documented”. Then you point out that it is documented, with a video walkthrough, and that you asked the team to read it and give feedback multiple times, and nobody gave a f.

Managers ask detailed questions about the IC’s tasks and priorities, only to forget it half an hour later and ask again and again.

I don’t see the point of fighting this, I’m sure I do the same to some degree. You just need to assume nobody reads anything and nobody listens or remembers anything, so be patient and explain everything every time… at least I don’t have a better strategy.


> Managers ask detailed questions about the IC’s tasks and priorities

I've told the various teams that I wouldn't have to phone anyone if they updated the ticket. When I see a ticket that has not been updated for 2 months, there's no way I'm not phoning the assigned person.

Problem is that, even when I was a full-time IC, we hardly ever updated the ticket unless we felt we had made progress. An update saying "Chased bug with no success $TODAY, requested $SENIOR to consult with me on this" feels like a worthless ticket update, but from the client's PoV this is valuable info - it means that it hasn't dropped off our radar, we haven't forgotten about it, etc.


I’ve been at it 18 years for different organisations and my experience and strategy is the same as yours.

No one bothers to read/understand anything until the very last minute, then they realise “oh shit, this won’t work in this scenario”, and it’s always a showstopper…


But what about my impression that it is getting worse? When I (as a developer) was trying to help customers with the product 20 years ago, about 50% (my guesstimate) of people actually read what I wrote, at least to a good degree. These days I am lucky if 20% read answers to their problems in more than a completely superficial way. I blame social media and smartphones.

I think the quality of comms is definitely getting worse. I work for myself now and am very selective about clients, so I don’t butt heads against it as much, but it still happens despite best efforts, especially when people move on or even go on vacation.

Literacy skills have been falling, and it shows up in testing across a lot of different countries; it basically lines up with the arrival of the iPhone and Android (the first real smartphones).


In fact, people do read documentation when they know it exists and when it is reasonably written.

People often don't, but AI always does. As we rely on AI more and more, having strong docs will become even more important.

This is why, I'm assuming, so many people aren't averse to LLM-generated/filtered text. If you never really read or wrote what you were reading and writing anyway, the LLM can get something similar without much effort. Frustrating if you actually need something.

What I say: This is not ready for production.

What management hears: We can sell this to the customer for acceptance testing.


What you say: I don’t want to take responsibility.

What management hears: I want someone else to take responsibility for me.


What I say: I want a pay rise.

What management hears: We want more pizza parties.


Nah. Our CEO just fires everyone who asks for a raise because they're a liability now.

FWIW, this is in a country that supposedly has really strong unions and worker protections.


There is a very weird and very awesome Soviet-era movie, Kin-dza-dza. At one point one of the characters says this about another: "he says things that he does not think, and he thinks things that he does not say."

One of my favorite movies. There is context to it, though: a dystopian, post-apocalyptic society where all planet natives are telepathic and can read each other's thoughts.

They ordered 40% of the global RAM production for 2025/26. It was a non-binding agreement that either side could easily withdraw from, but they're essentially trying to buy about half of all the RAM.

Blows my mind that it's non-binding.

If I booked half a hotel's rooms and then suddenly said "yeah, never mind. Half my friends cancelled and we're not staying", basically any hotel would come after me for my money, because there's no way they can fill their rooms now and they're losing revenue. But OpenAI can really get the whole world to pivot towards it, then say "cool, but we don't need your product anymore", and RAM makers are just going to let it go.

Whoever decided that was a good idea needs to be fired and publicly shamed.


Well, if that hotel was then able to sell the other half of its rooms for 10x the old price, the hotel might actually be happy, as it can now charge 10x for the other half or slowly lower prices back down over years.

It's used for price gouging by the cartel.

If anything, OpenAI might be in on it.


A while ago (8 years, in fact) I wanted to make something like this. I made a slightly half-assed React library that used twgl to overlay a shader on an element on a page (or a canvas for some effects). One thing I had was measuring where an element was on the page and then using absolute positioning to put a canvas over it, so effects could expand beyond the confines of the element. I still think that has potential for some fun things, but browsers aren't really fast enough to keep it in the right place while scrolling, so it never works properly. https://react-neon.ooer.com/ + https://github.com/onion2k/react-neon

These new libraries are much better than anything I came up with.


Can you elaborate on the scrolling issue?

It’s a WebGL issue, fixed in WebGPU.

Browsers generally only allow a fixed number of WebGL contexts per page, so a generic element-effect library has the issue that with too many elements, some will start losing contexts. The workaround is to make one large, screen-sized canvas and then figure out where the elements are that you need to draw an effect for; now only one context is drawing all the elements. But you can’t know where to draw until the browser scrolls and renders, so you’re always one frame behind.

https://webgl2fundamentals.org/webgl/lessons/webgl-multiple-...

WebGPU doesn’t have this issue. You can use the same device with multiple canvases.

https://webgpufundamentals.org/webgpu/lessons/webgpu-multipl...
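
The bookkeeping in that single-canvas workaround is just a coordinate translation from document space into the fixed overlay's space. A minimal sketch in Python-as-pseudocode (the function name is made up; in the browser you'd get these numbers from getBoundingClientRect and window.scrollX/scrollY each frame):

```python
def element_rect_on_canvas(el_page_x: float, el_page_y: float,
                           width: float, height: float,
                           scroll_x: float, scroll_y: float) -> tuple:
    """Map an element's document-space position into the coordinate
    space of a single viewport-sized canvas overlay: subtract the
    current scroll offset from the element's page position."""
    return (el_page_x - scroll_x, el_page_y - scroll_y, width, height)

# Element at (100, 900) in the document, page scrolled down 800px:
# the effect should be drawn at y=100 on the overlay canvas.
print(element_rect_on_canvas(100, 900, 200, 50, 0, 800))  # (100, 100, 200, 50)
```

The one-frame lag comes from the fact that the scroll offsets are only known after the browser has already rendered the frame you wanted to draw over.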


It's been a while so I might be a little off, but the problem was that the effect would lag behind slightly (one frame?) because I used an observer to track where the element moved to, since the overlay element was locked to the viewport. I think I did that to avoid having a canvas the size of the entire page. Where a canvas could just be absolutely positioned it was OK, but for reasons I can't remember that didn't work for everything.

Not really related to this 'discussion', but this is an interesting problem in the AI space. It's essentially a well-understood problem in unreliable distributed systems: if you have a series of steps that might not respond with the same answer every time (usually because one might fail), how do you get to a useful and reliable outcome? I've been experimenting with running a prompt multiple times and having an agent diff the output to find parts that some runs missed, or having it vote on which run resulted in the best response, with a modicum of success. If you're concerned about having another layer of AI in there, then getting the agents to return some structured output that you can run through a deterministic function is an alternative.

Non-determinism is a problem that you can mitigate to some extent with a bit of effort, and doing so is important if your AI is running without a human-in-the-loop step. If you're there prompting it, though, then it doesn't actually matter. If you don't get a good result, just try again.
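
The deterministic-function version of the voting idea is very small. A minimal sketch, assuming each run has already been reduced to a comparable structured output (here just strings; `majority_vote` is a made-up name, not any library's API):

```python
from collections import Counter

def majority_vote(runs: list) -> str:
    """Pick the most common answer across repeated runs of the same
    prompt. Ties resolve to whichever answer appeared first, since
    Counter preserves insertion order."""
    return Counter(runs).most_common(1)[0][0]

# Three runs of the same prompt, one of which went off the rails:
runs = ["42", "42", "forty-two-ish"]
print(majority_vote(runs))  # 42
```

With real agents you'd normalise each run's output (e.g. parse it to JSON and compare canonical forms) before voting, otherwise trivially different phrasings never agree.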


Don’t know if this is an annoying response… but how about just going through the code and checking and grading the quality yourself?

I could do, but the end goal is to scale this to 100x what I can do myself, and there isn't time to review all those changes. By attempting to solve the problem while it's tiny and I can still keep it in my head, I'll end up building something that works at scale.

Maybe. The point is that this is all new, and looking forwards I think it's worth figuring out this stuff early.


The companies that are entirely AI-dependent may need to raise prices dramatically as AI prices go up.

Or they'll price the true cost in from the start, and make massive profits until the VC subsidies end... I know which one I'd do.


We don't know what Anthropic's true costs are, so pricing that in is at best a guess.

I use Codex at home and Opus at work. They're both brilliant.
