scrpl's comments

Another reason to keep foundational protocols small. HTTP/2 has been around for more than a decade (counting SPDY), and this is the first time this type of attack has surfaced. I wonder what surprises HTTP/3 and QUIC hide...


DNS is a small protocol, and it's abused by DDoS actors worldwide for reflection attacks.


DNS is from 1983, give it some slack


The point I'm trying to make is that "small" protocols aren't less likely to be DDoS vectors.

Avoiding designed-in DDoS reflection/amplification vectors requires luck or intention, not just making the protocol small.


Small, less complex protocols are inherently less likely to be insecure all things being equal, simply due to reduced attack surface.

DNS was created for a different environment, at a time when security wasn't at the forefront, so it's not a good counterexample.


This is such a strong claim that I'd really appreciate something other than "smaller is better" to back it up.

Abuse and abuse vectors vary wildly in complexity, and some complexity is required precisely to avoid dumb bottlenecks, if not vulnerabilities. So on what basis are you saying something simple will inherently resist abuse better?


> Small, less complex protocols are inherently less likely to be insecure all things being equal, simply due to reduced attack surface.

That feels intuitive in the "less code is less bugs is less security issues" sense, but it implies that "secure" and "can't be abused" are the same thing.

Related? Sure. Same? No.

Oddly enough, we probably could have prevented the reflection/amplification DoS attacks that use DNS by making DNS more complex: adding mutual authentication so it's not possible for A to request something that is then sent to B.


We could have prevented the reflection/amplification DoS attacks that use DNS by making DNS use TCP.

In practice though the only way to "fix" DNS that would've worked in the 80s would've probably been to require the request be padded to larger than the response...
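The padding idea works because a reflection attacker's payoff is just the response-to-request size ratio. A toy sketch of that arithmetic (the byte counts are illustrative, not real DNS measurements):

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth multiplier a reflection attacker gets: bytes the
    victim receives per byte the attacker sends (spoofed) to the server."""
    return response_bytes / request_bytes

# A small query triggering a large answer makes an attractive reflector.
print(amplification_factor(64, 3200))             # 50.0

# Padding the request to at least the response size caps the factor
# at 1.0, making reflection pointless for the attacker.
print(amplification_factor(max(64, 3200), 3200))  # 1.0
```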


But TCP is way more complex


... yeah? I know? "In practice though the only way to "fix" DNS that would've worked in the 80s would've probably been to require the request be padded to larger than the response..."

It's not as complex as some "mutual authentication" scheme though lmao


I'm also from 1983 and I haven't been DDoSed


DNS is an enormous protocol, almost unmeasurably large.


That's a bit overblown. There's a lot there and some of it conflicts with itself but it's not unmeasurably large by any means. It's a knowable protocol (and yes, I'm aware of the camel meme[1]).

1. https://powerdns.org/dns-camel/


Quiz: which RFCs do you need to know and implement to implement DNS?


QUIC didn't account for amplification attacks in its design and the people complaining about it were initially dismissed.


HTTP/2 is pretty small.


Great solution for a world without shared and dynamic IPs.


Not as bad as one may think. It's proper feedback which can be acted upon.

Every reasonable connectivity provider would pay attention to this info or face intense complaints from its users with shared and dynamic IPs. It would identify the sources of attacks, block them at a finer granularity, and report when the range has been cleared. (If a provider lied, everyone would stop believing it, and its disgruntled customers would leave.)

For shared hosting providers it would mean blocking specific user accounts using a firewall, notifying users, and maybe even selling cleanup services.

For home internet users, it also would mean blocking specific users, contacting them, helping them identify the infected machine at home.

It would massively drive patching of old router firmware, which is often compromised and infected. Same for IoT stuff, infected PCs, malicious apps on phones, etc. There would be an incentive to stay clean.


If the one doing the blocking is not at FAANG, it would do nothing of the sort. And FAANG benefits from DDoS by getting people into their walled cloud gardens.


Funny man thinks the big ISP cares that you blocked your own site from your own customers coming from the big ISP's network.


No; with shared hosting, somebody else gets the IP that serves many paying customers blacklisted.


Block the whole subnet and make it the ISP's problem?
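For illustration, here's what "block the whole subnet" amounts to, sketched with Python's standard `ipaddress` module (the /24 granularity and the addresses are assumptions for the example; real blocklists vary):

```python
import ipaddress

def block_subnet(offender_ip: str, prefix_len: int = 24) -> ipaddress.IPv4Network:
    """Map a single offending IP to its enclosing subnet,
    the way a coarse blocklist entry would."""
    return ipaddress.ip_network(f"{offender_ip}/{prefix_len}", strict=False)

blocked = block_subnet("203.0.113.57")
print(blocked)                                           # 203.0.113.0/24
# Collateral damage: every other host in the range is blocked too.
print(ipaddress.ip_address("203.0.113.200") in blocked)  # True
```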


It's interesting to me that most of the pushback so far has been about the business model of the internet, i.e. people need link traversal and content publishing in order to make money from advertising (implied, but not stated). Therefore we need to add yet another layer to the mix, the cloud providers, and start paying those guys.

And yes, we can block entire subnets. You own the IP addresses, so you're responsible for the stuff coming out of them, at least to the degree that it's not malicious to the web as a whole (but not for the content itself, of course).

I'm calling bullshit on these assumptions. The internet is a communications tool. If it's not communicating, it's broken. If you provide dynamic IPs to clients that attack people, you're breaking it. It's not my problem or something I should ever be expected to pay for.

To be clear, my point is that we're suggesting yet another layer of commercial, paid crap on top of a broken system in order to fix it. It'd be phenomenally better just to publicly identify the places and methods where it's broken and let folks with more vested interests than information consumers worry about it. Hell, I'm not interested in paying for the busload of bytes I'm currently consuming for every sentence of value I receive.


Because when a single machine is infected at one ISP, it's a good idea to block the whole subnet? I don't think any commercial activity could afford such a security strategy, blindly blocking legit users by the thousands.


So it’s the ISPs fault that my grandma never met a spam email that she didn’t want to click?

One of the things that gets lost in this kind of debate is that the vast, vast majority of Internet users are not experts in how the Internet, computers, or their phones work. So expecting them to "just not get exploited" is a naive strategy, and bringing the pain to the ISP feels counterproductive, because what, realistically, can ISPs do to stop all of their unsophisticated users from getting themselves exploited?

At the end of the day, the vast majority of the users of the Internet do not care how it works - they want their email, they want their cat videos, and they want to check up on their high school ex on Facebook. How can we rearchitect the Internet to be a) open, b) privacy-protecting, and c) robust against these kinds of attacks, so that the targets of DDoS attacks have better protection than paying a third party and hoping that third party can protect them?


How does the ISP solve it? Send a mass mail/email telling people to reset their devices because someone has a device with botnet malware?


That is their problem. Maybe the price needs to go up if you don't secure all your devices, since the ISP is going to send a tech to your house. Or maybe the ISP has deep enough pockets to find and sue those cheap IoT device makers for not being secure, thus funding their tech support team.


Egress filtering? A botnet DDoS stream should not look like normal network traffic...
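Egress filtering in the BCP 38 sense means the provider drops outbound packets whose source address doesn't belong to its own prefixes, which kills spoofed reflection traffic at the source. A minimal sketch of that check (the prefixes here are made-up documentation ranges, not any real ISP's):

```python
import ipaddress

# Hypothetical prefixes the ISP actually owns.
OWN_PREFIXES = [ipaddress.ip_network("198.51.100.0/24"),
                ipaddress.ip_network("203.0.113.0/24")]

def permit_egress(src_ip: str) -> bool:
    """BCP 38-style rule: only let a packet out of the network if its
    source address belongs to one of our own prefixes."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OWN_PREFIXES)

print(permit_egress("198.51.100.7"))  # True  (legit customer traffic)
print(permit_egress("192.0.2.10"))    # False (spoofed source, dropped)
```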


> Sorry citizen, google services are inaccessible because the only ISP in your city sold a service to a bad actor.

> We might fix this, we might not; you DON'T have a choice.

> Thank you for your continued business.


Indistinguishable from the kind of service I get from Google - the moment that I need a human involved I just close my account with whatever Google service is misbehaving and move on.


But you have other options which is my point.

(swap in any corpo-service provider you personally like the most)

Blanket banning subnet ranges from services because of the actions of someone else is 3rd world shit.


Hacker News nerds will argue all day long that the Internet is a utility when the argument happens to personally benefit them, then in the same breath say that a random network admin is justified in blocking a whole ISP subnet due to one “bad” actor. And of course by bad actor I mean person that almost certainly accidentally got themselves infected with malware by not understanding the completely Byzantine world of computers and the Internet.


Well, if someone's house wiring were damaged in a way that causes brownouts for the neighbours, wouldn't the electric company be justified in cutting off the house?


I'm sure Comcast is terrified that their users won't be able to read my blog.


You are quite obviously speaking from the perspective of someone who wouldn't be in a position to make these calls.


Those movements are extremely animal-like, to the point of being unsettling (even though I want a robot like that for myself).


And embedded systems (cars and stuff)...


That's just sad. Open source support really needs to be normalized in corporate environments. Right now it's more the exception than the rule.


I agree, but in this particular case I have to ask... how many companies are actually USING Krita? My impression is that the vast majority of places that need software like that use Adobe Photoshop/Illustrator or Affinity Photo/Designer.


That's exactly the problem.

It's not only that they use proprietary products; it's that people think of Krita as an alternative to Photoshop, when Krita is really intended for digital painting rather than general raster image manipulation. That narrows Krita's target to a much smaller audience.


To an extent, it is, since a lot of artists use PS to draw/paint. For those people, Krita is indeed an alternative to PS.


Probably not many, if you don't count small individual art studios. The mobile gacha game industry (and anime animation, to some extent) doesn't standardize art styles and pipeline art production the way the American movie and comic industries do, but relies on intimate collaborations with external, individual artists for creative components.

So they mostly just import (Krita-exported) PSDs into Ps, and even if Krita were used professionally on the floor by employed artists, the choice of tools would be up to the artist's discretion and might not become a corporate talking point the way, say, a Maya-vs-Lightwave debate would.

Maybe OnlyFans/Patreon could throw in a million or two for a couple of years...? But Krita is not the first choice across the board, and creators on those platforms don't seem too concerned with CSP/Procreate subscriptions, so that might be a difficult path too.


I see it used by concept artists in gamedev studios from time to time.


> how many companies are actually USING Krita?

I assume a product that advertises its lack of corporate support won't be super welcoming of commercial users.


In a corporate setting, it helps if open source software has easy deployment configurations to track usage and ensure vulnerable versions aren't lurking somewhere. Firefox, for instance, has this.


> Open Source support really needs to be normalized in corporate environment

It won't. The only real workaround right now is to simultaneously launch SaaS alongside the FOSS project and monetize that heavily.


> It won't. The only real workaround right now is to simultaneously launch SaaS alongside the FOSS project and monetize that heavily.

It can work. Paying for software is already a normal part of doing business, so make this work to your advantage. For example:

- In the budgeting process, just add a line item for the FOSS software you're using and put a number on it that's lower than the proprietary alternative.

- If you're already using the software (like Krita in this case), tell whoever is in charge of the purse strings how much time, effort, and money the software has saved the company and ask them to make a one-off or recurring payment to the project that's lower than the alternative. You'll be surprised how often they say yes (as long as they can get a receipt).


When corporate support rolls in, projects tend to turn sour.

Demands from the corporate entities proceed to dominate over the voices of individual donors. It's a double-edged sword.


That's because many corporate "donations" are not so much a donation as a way of soft-buying a feature.

It's hard for businesses hyperfocused on short-term gains to understand the long-term value of, for example, supporting an alternative to the industry-dominating Adobe toolkit. But the value is there.


Long-term value that's hard to define doesn't translate well to stock price, especially when any investment also helps competitors who aren't investing anything into the project.


And the examples are? Because the counterexamples, like Blender and Linux, are phenomenal.


Khara threw a bag of pachinko money in Blender's face to make the last Evangelion film work, and it was fine. I guess that was a rare case where they desperately needed a tool they could hand to broke freelancers without frantically searching for keygens, but it can totally happen when the incentives are right.


Krita is accepting corporate donations.


You don't have control, because the browser might not support HTTP/3 at all. It's up to browser developers to decide when their support is mature enough to enable by default. There's no other way of doing it.


A few years ago I wrote an article on HTTP/3 that was briefly featured on HN as well: https://news.ycombinator.com/item?id=24834767

I can't agree with the author about it eating the world, though. It seems like only internet giants can afford to implement and support a protocol this complex, and they're the only ones who will get a measurable benefit from it. It's an upgrade for sure, but an expensive one.


It's not all bad; you can start using Caddy for your new project and serve HTTP/3 right now.
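For instance, a minimal Caddyfile is enough: recent Caddy versions (v2.6+) negotiate HTTP/3 automatically when serving HTTPS and advertise it via the Alt-Svc header. The domain and backend port below are placeholders:

```Caddyfile
example.com {
    # Caddy obtains a TLS certificate and serves the site over
    # HTTP/1.1, HTTP/2, and HTTP/3 with no extra configuration.
    reverse_proxy localhost:8080
}
```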



This is insane! 11 hours or not, I didn't expect SD could ever run on hardware like the Pi Zero.


I write about my experiences in tech and stuff I've learned: https://scorpil.com

