Another reason to keep foundational protocols small. HTTP/2 has been around for more than a decade (including SPDY), and this is the first time this attack type has surfaced. I wonder what surprises HTTP/3 and QUIC hide...
This is such a strong claim I'd really appreciate something other than "smaller is better"
Abuse vectors vary wildly in complexity, and some complexity is required precisely to avoid dumb bottlenecks, if not vulnerabilities. So on what basis are you saying something simple will inherently resist abuse better?
> Small, less complex protocols are inherently less likely to be insecure all things being equal, simply due to reduced attack surface.
That feels intuitive in the "less code is less bugs is less security issues" sense but implies that "secure" and "can't be abused" are the same thing.
Related? Sure. Same? No.
Oddly enough, we probably could have prevented the replay/amplification DoS attacks that use DNS by making DNS more complex / adding mutual authentication, so it's not possible for A to request something that is then sent to B.
We could have prevented the replay/amplification DoS attacks that use DNS by making DNS use TCP.
In practice though the only way to "fix" DNS that would've worked in the 80s would've probably been to require the request be padded to larger than the response...
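To put a number on the padding idea: an attacker's leverage in a reflection attack is just the response/request size ratio, and padding the request to at least the response size drives that ratio to 1. A minimal sketch (the byte counts are illustrative assumptions, not measurements):

```python
# Why UDP DNS reflection pays off, and why request padding would kill it.
# REQUEST_BYTES / RESPONSE_BYTES are assumed illustrative sizes.

REQUEST_BYTES = 64     # a typical small DNS query (assumed)
RESPONSE_BYTES = 3000  # a large response, e.g. ANY with DNSSEC records (assumed)

def amplification_factor(request_size: int, response_size: int) -> float:
    """Bytes the victim receives per byte the attacker sends (spoofed source)."""
    return response_size / request_size

# Spoofing the victim's address: attacker spends 64 bytes, victim gets 3000.
print(amplification_factor(REQUEST_BYTES, RESPONSE_BYTES))  # ~46.9

# If the protocol required padding the request to >= the response size,
# the factor could never exceed 1 - reflection would cost the attacker
# as much bandwidth as flooding the victim directly.
padded_request = max(REQUEST_BYTES, RESPONSE_BYTES)
print(amplification_factor(padded_request, RESPONSE_BYTES))  # 1.0
```

The point isn't the exact numbers; it's that padding removes the economic incentive without needing any authentication machinery.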
... yeah? I know? "In practice though the only way to "fix" DNS that would've worked in the 80s would've probably been to require the request be padded to larger than the response..."
It's not as complex as some "mutual authentication" scheme though lmao
That's a bit overblown. There's a lot there and some of it conflicts with itself but it's not unmeasurably large by any means. It's a knowable protocol (and yes, I'm aware of the camel meme[1]).
Not as bad as one may think. It's proper feedback which can be acted upon.
Every reasonable connectivity provider would pay attention to this info, or face intense complaints from its users with shared and dynamic IPs. It would identify sources of attacks, block them at a finer granularity, and report when the range has been cleared. (If a provider lied, everyone would stop believing it, and the disgruntled customers would leave.)
For shared hosting providers it would mean blocking specific user accounts using a firewall, notifying users, and maybe even selling cleanup services.
For home internet users, it also would mean blocking specific users, contacting them, helping them identify the infected machine at home.
It would massively drive patching of old router firmware which is often cracked and infected. Same for IoT stuff, infected PCs, malicious apps on phones, etc. There would be an incentive to stay clean.
If the one doing the blocking is not at FAANG, it would do nothing of the sort. And FAANG benefit from DDoS by getting people into their walled cloud gardens.
It's interesting to me that most of the push-back so far has been about the business model of the internet, i.e. people need link traversal and content publishing in order to make money from advertising (implied, but not stated). Therefore we need to add yet another layer to the mix, the cloud providers, and start paying those guys.
And yes, we can block entire subnets. You own the IP addresses, you're responsible for stuff coming out of them, at least to the degree that it's not malicious to the web as a whole. (But not the content itself, of course.)
I'm calling bullshit on these assumptions. The internet is a communications tool. If it's not communicating, it's broken. If you provide dynamic IPs to clients that attack people, you're breaking it. It's not my problem or something I should ever be expected to pay for.
To be clear, my point is that we're suggesting yet another layer of commercial, paid crap on top of a broken system in order to fix it. It'd be phenomenally better just to publicly identify place and methods where it's broken and let other folks with more vested interests than information consumers worry about it. Hell, I'm not interested in paying for the current busload of bytes I'm currently consuming for every one sentence of value I receive.
Because when a single machine is infected at one ISP, it's a good idea to block the whole subnet? I don't think any commercial activity could afford such a security strategy, blindly blocking legit users by the thousands.
So it’s the ISPs fault that my grandma never met a spam email that she didn’t want to click?
One of the things that gets lost in this kind of debate is that the vast, vast majority of Internet users are not experts in how the Internet, computers, or their phones work. So expecting them to be able to "just not get exploited" is a naive strategy and bringing the pain to the ISP feels counterproductive because what, realistically, can they do to stop all of their unsophisticated users from getting themselves exploited?
At the end of the day, the vast majority of the users of the Internet do not care how it works - they want their email, they want their cat videos, and they want to check up on their high school ex on Facebook. How can we rearchitect the Internet to be a) open, b) privacy-protecting, and c) robust against these kinds of attacks, so that the targets of DDoS attacks have better protection than paying a third party and hoping that third party can protect them?
That is their problem. Maybe the price needs to go up if you don't secure all your devices, since the ISP is going to send a tech to your house. Or maybe the ISP has deep enough pockets to find and sue those cheap IoT device makers for not being secure, thus funding its tech support team.
Indistinguishable from the kind of service I get from Google - the moment that I need a human involved I just close my account with whatever Google service is misbehaving and move on.
Hacker News nerds will argue all day long that the Internet is a utility when the argument happens to personally benefit them, then in the same breath say that a random network admin is justified in blocking a whole ISP subnet due to one “bad” actor. And of course by bad actor I mean person that almost certainly accidentally got themselves infected with malware by not understanding the completely Byzantine world of computers and the Internet.
Well, if someone had somehow gotten their house wires damaged in a way that causes brownouts to neighbours, wouldn't the electric company be justified in cutting off the house?
I agree, but in this particular case i have to ask... how many companies are actually USING Krita? My impression is that the vast majority of places that need software like that use Adobe Photoshop/Illustrator, or Affinity Photo/Designer.
It's not only that they use proprietary products - it's that people think of Krita as an alternative to Photoshop, when Krita is actually intended for digital painting rather than general raster image manipulation. That narrows Krita's target to a much smaller audience.
Probably not many if you don't count small individual art studios - the mobile gacha game industry (and anime animation to some extent) doesn't standardize art styles or pipeline art production the way the American movie and comic industries do, but relies on intimate collaboration with external, individual artists for creative components.
So they mostly only import (Krita-exported) PSDs into Ps, and even if Krita were used professionally on the floor by employed artists, the choice of tools would be up to the artist's discretion and might not become a corporate talking point the way, say, a Maya-vs-Lightwave debate would.
Maybe OnlyFans/Patreon could throw in a million or two for a couple of years...? But Krita is not the first choice across the board, and creators on those platforms don't seem too concerned with CSP/Procreate subscriptions, so that might be a difficult path too?
In a corporate setting, it will help if open source software has easy deployment configurations to track usage and ensure vulnerable versions are not lurking somewhere. Firefox for instance has this.
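A minimal sketch of the "no vulnerable versions lurking" part, assuming an inventory of hostnames mapped to installed versions (all names and version numbers below are hypothetical examples, and the minimum-safe floor is an assumption, not a real advisory):

```python
# Sketch: flag fleet machines running below an assumed minimum-safe version.
# Hostnames, versions, and MIN_SAFE are hypothetical illustrative values.

def parse(version: str) -> tuple:
    """Turn '115.0.2' into (115, 0, 2) so versions compare as tuples."""
    return tuple(int(part) for part in version.split("."))

MIN_SAFE = "115.0.2"  # assumed floor, not tied to any real CVE

fleet = {
    "workstation-a": "102.1.0",  # below the floor -> should be flagged
    "workstation-b": "118.0.1",
    "workstation-c": "115.0.2",
}

vulnerable = sorted(h for h, v in fleet.items() if parse(v) < parse(MIN_SAFE))
print(vulnerable)  # ['workstation-a']
```

In practice the inventory would come from whatever endpoint-management tooling the company already runs; the point is that without a deployment story that exposes installed versions, this check is impossible.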
> It won't. The only real workaround right now is to simultaneously launch SaaS alongside the FOSS project and monetize that heavily.
It can work. Paying for software is already a normal part of doing business, make this work to your advantage. For example:
- In the budgeting process just add a line item for the FOSS software you're using and put a number on it that's lower than the proprietary alternative.
- If you're already using the software (like Krita in this case), tell whoever is in charge of the purse strings how much time, effort, and money the software has saved the company and ask them to make a one-off or recurring payment to the project that's lower than the alternative. You'll be surprised how often they say yes (as long as they can get a receipt)
That's because many corporate "donations" are not so much a donation as a way of soft-buying a feature.
It's hard for businesses hyperfocused on short-term gains to understand the long-term value of, for example, supporting an alternative to the industry-dominating Adobe toolkit. But the value is there.
Long-term value that's hard to define doesn't translate well to stock price especially when any investment also helps competitors who aren't investing anything into the project.
Khara threw a bag of pachinko money in Blender's face to make the last Evangelion film work, and it was fine. I guess it was a rare case where they desperately needed a tool they could hand to broke freelancers without frantically searching for keygens, but it can totally happen when the incentives are right.
You don't have control because the browser might not support HTTP/3 at all. It's up to browser developers to decide when their support is mature enough to enable by default. There's no other way of doing it.
I can't agree with the author about it eating the world, though. It seems like only internet giants can afford implementing and supporting a protocol this complex, and they're the only ones who will get a measurable benefit from it. It is an upgrade for sure, but an expensive one.