Arrowmaster's comments | Hacker News

Twenty-some years ago, when cable broadband was new, you connected a computer and got a public IP. For this example let's just assume it was a public /24. Back then there was no firewall built into Windows, and it didn't ask you whether you were connecting to a public or private network.

For some ISPs you could connect a switch or hub (hubs still existed when cable came out; 1 Gbps switches were expensive) and connect multiple computers, and they would all get different public IPs.

Back then a lot of network applications, like Windows file sharing, heavily used the local subnet broadcast IP to announce themselves to other computers on the network. Yes, this meant that when you opened up Windows file sharing you might see the share from Dave's computer across town. I don't recall if the hidden, always-on administrative shares like C$ were widely known about at the time.

ISPs fixed this by blocking most of the traffic to and from the subnet broadcast address at the modem/headend level, but for some time after I could still run a packet capture and see all the ARP packets and some other broadcasts from other modems on my node. It just wasn't enough to be able to interfere with them anymore.


I understand this aspect, and this conversation is tricky because most consumer routers have a barebones firewall built in to reject the routing mentioned by the OP. So what we think of as a "router doing NAT" is often subtly doing more. I'd hate to call what a barebones consumer router does a firewall, though, because it lacks important firewall features that are necessary for security.

Yes. Every clone of this idea does the same thing, and a new one pops up every week. When I try to point out that the secrets should be exposed through file namespaces instead of ENV vars, the amount of hostility is shocking.


Honestly, I was expecting more. Many languages support Unicode in variable or function names, and I expected it to be used there.

It sounds like Python only allows approved Unicode characters to start a variable name, but if it allowed any character you could do something like `nonprintable = lambda x: insert exploit code here`. If that was hidden in what looked like a blank line between other additions, would you catch it?
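For what it's worth, Python's identifier rules can be checked from Python itself: identifiers must be made of Unicode XID characters (so invisible format characters are rejected), and they are NFKC-normalized at compile time, which has its own lookalike pitfalls. A small sketch (the `ſecret` example is illustrative):

```python
# Format/control characters cannot start or appear in a Python identifier.
print("\u200b".isidentifier())   # zero-width space -> False
print("\u03b1".isidentifier())   # Greek alpha -> True

# Identifiers are NFKC-normalized by the compiler, so two visually different
# spellings can silently name the same variable:
ns = {}
exec("\u017fecret = 1", ns)      # source says 'ſecret' (long s)
print("secret" in ns)            # True: normalized to plain 'secret'
```

So an invisible-character identifier won't parse, but normalization means a reviewer can still be shown one name while the code binds another.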

I'm sure there's some other language out there that has similar syntax and lax Unicode rules this could be used in.

The solution is that this and many other Unicode formatting characters should be rendered as a visible indicator in all code views where you expect plain text.


> The solution is that this and many other Unicode formatting characters

This isn't about formatting characters; this is about private-use characters.


You know that QR code is just text you can read, right? It's just an otpauth:// URI you can copy and paste into most password managers.

We even have these amazing things that securely share passwords or other secret data between multiple authorized users.

Seriously, just scan the QR code and put it in any password manager that supports TOTP and it will start outputting codes.
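To see why any TOTP-capable manager works: the otpauth:// URI just carries a base32 secret, and a TOTP code (RFC 6238) is an HMAC over a 30-second time counter. A minimal stdlib sketch (the `totp` helper name and parameters are illustrative):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code (SHA-1 variant) from a base32 secret."""
    pad = "=" * (-len(secret_b32) % 8)            # base32 requires padding
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if for_time is None else for_time) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59, 8 digits
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, for_time=59, digits=8))    # 94287082
```

Anything that implements this, from a phone app to a password manager, will produce the same codes from the same secret.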


Yes, I am very familiar with zbarimg and qrencode. But other people might not be, and that's why just scanning a QR code works. Not everyone has Bitwarden, 1Password, pass, KeePass, etc. Also, these tools may not be approved by your security team.

And we are talking about the root account for your production AWS account. No need to get fancy. Just print the QR code and put it in a safe, hoping you never need it.


That's precisely why you want it in a safe.


The latest release was June 2022, and the last non-dependabot commit was March 2023, until new activity four days ago using AI. Why should anyone use this?


Part of how the USA got that way is, hilariously enough, anti-corruption policies.


I'm currently in an endless email loop because someone named Raymond used one of my Gmail addresses to register with State Farm. One of their agents even emails me directly when Raymond gets really behind on his payments, but won't do anything when I tell them it's the wrong email.

In the past when this happened I usually reset the password and changed the email to some anonymous throwaway, but I can't do that without Raymond's DOB (don't quote me on that; it's been a while since I tried).


This exact thing happened to me with a State Farm agent.

After a few months, I told them I was concerned about the privacy ramifications and would have to report it to their state insurance regulator, and it was very quickly fixed.


Not just idiots: rich idiots who will make more from the hype and publicity than we normal people could make in a few years.


This would be perfect if it were also able to expose secrets as files scoped to the process, à la /run/secrets/secret_name.


The problem isn't the .env file itself; using environment variables at all to pass secrets is what's insecure.


I strongly disagree.

Environment variables are, by far, the most secure AND most practical way to provide configuration and secrets to apps.

Any other way is less secure: files on disk, CLI arguments, a database, etc. Or it's about as secure but far more complex and convoluted. I've seen enterprise hosting with a (virtual) mount (NFS, etc.) that provides config files, read-only, with tight permissions, served from a secure vault. That's a lot of indirection for getting secrets into an app that will still just read them in plain text. More secure than env vars? How?

Or some encrypted database/vault that the app can read from using... a shared secret provided as an env var or an on-disk config file.


Disagree; the best way to pass secrets is by using mount namespaces (systemd and Docker do this under /run/secrets/) so that the program can access the secrets as needed but they don't exist in the environment. The process is not complicated; many systems already implement it. By keeping them out of ENV variables you no longer have to worry about the entire ENV getting written out during a crash or debugging session and exposing the secrets.
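The consuming side of that pattern is a one-liner regardless of whether systemd's LoadCredential= or Docker secrets did the mounting. A sketch (the helper name and path-sanity check are illustrative):

```python
import os
from pathlib import Path

def read_secret(name: str, base: str = "/run/secrets") -> str:
    """Read a secret mounted as a file, keeping it out of os.environ."""
    # Refuse path traversal in the secret name.
    if os.sep in name or name in (".", ".."):
        raise ValueError(f"invalid secret name: {name!r}")
    return (Path(base) / name).read_text().rstrip("\n")

# Usage: db_password = read_secret("db_password")
# A dump of os.environ now contains no secrets.
```

Because the secret never enters the environment, it isn't inherited by child processes or captured when the environment is logged.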


How does a mounted secret (vault) protect against dumping secrets on crash or during debugging?

The app still has it. It can dump it. It will dump it. Django, for example (not a security best practice in itself, btw), will indeed dump ENV vars, but it will also dump its settings.

The solution to this problem lies not in how you get the secrets into the app, but in prohibiting them from getting out of it. E.g., builds removing or stubbing tracing and dumping entirely, or proper logging and tracing layers that filter stuff.

There really is no difference, security-wise, between logger.debug(system.env) and logger.debug(app.conf).
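That filtering layer is straightforward with stdlib logging; a sketch using a logging.Filter (the key list and regex are illustrative, and real redaction needs to cover record.args too):

```python
import logging
import re

class RedactSecrets(logging.Filter):
    """Logging filter that masks values following secret-looking keys."""
    PATTERN = re.compile(r"(?i)\b(password|secret|token|api_key)\b(\W*)(\S+)")

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.PATTERN.sub(r"\1\2[REDACTED]", str(record.msg))
        return True  # keep the record, just scrubbed

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactSecrets())
logger.addHandler(handler)
logger.warning("login failed, password=hunter2")  # logs: password=[REDACTED]
```

The same filter scrubs the message whether it came from the environment or from app config, which is the point: redact at the logging boundary, not at the injection boundary.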

