It does if you make the card self-destruct when you don't write "signed, ricardobeat" on it. Courts went over this in the 1990s with Nintendo: the Game Boy wouldn't boot any game that didn't start with "signed, Nintendo", so game companies just put that there, and it wasn't illegal.
(Later, a trick was found to replace the signature and still boot, but it required extra chips in the game cartridge)
That is not the case, is it? You only need to spoof the BambuStudio client in order to use their cloud infrastructure. Sending prints over LAN is still possible without it.
- "It is more convenient" is not a strong enough argument there, that's kind of the point of a commercial venture.
- Yes, they could be nicer about it. They aren't. That doesn't make this any more legal or acceptable.
Everything "wants to" run solely in RAM, but we don't have infinite RAM, so a production-grade database should also be able to fetch data from disk unless keeping everything in memory is an explicit tradeoff. MariaDB and PostgreSQL do not require all indices to be stored in RAM. Obviously indices can be accessed more quickly if they are in RAM, but those databases are designed under the assumption that data will often live on disk. It sounds like MongoDB is not, and given MongoDB's reputation, this is as likely to be incompetence as a deliberate tradeoff.
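As a concrete illustration: both PostgreSQL and MariaDB bound their in-memory caches with explicit settings, and pages that fall outside the cache are simply read from disk on demand. The values below are hypothetical, not recommendations:

```ini
# postgresql.conf -- PostgreSQL's own page cache
shared_buffers = 2GB            # pages cached by the server itself
effective_cache_size = 6GB      # planner hint: how much the OS cache adds on top

# my.cnf / mariadb.cnf -- MariaDB with InnoDB
[mysqld]
innodb_buffer_pool_size = 2G    # cache for data and index pages
```

Nothing here assumes the dataset fits in RAM; a 1TB database behind a 2GB buffer pool still works, just with more disk reads.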
Every serious database designed to handle moderate to high traffic will expect you to have enough RAM to fit all data and indices. Relational DBs do a solid job when that's not the case, but it also costs you the efficiency you could otherwise get from them. It will work for some time. If that's enough for you, that's fine.
I am not experienced with MongoDB, so I don't know whether the problems reported in earlier comments were the users' fault or MongoDB's. But one thing is clear to me: complaining that it uses too much RAM without knowing the reasons for it is a user problem. A common mistake is to set up a DB and expect it to just magically work. DBs are complicated beasts; you have to know how to deal with them.
You certainly don't need to hold all data in RAM to serve "moderate" traffic. A modern hard drive can seek about 80 times per second, an optimized RAID array more, and an SSD tens of thousands of times; pessimistically, each (read) request takes 5-10 random reads to service, and almost every system is read-mostly. To me a light load means up to about a request per second, a moderate load means maybe 20 requests per second, and a heavy load means hundreds or thousands of requests per second.
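A quick back-of-envelope calculation with the figures above (the seek rates and seeks-per-request are this comment's pessimistic assumptions, not measurements):

```python
# Back-of-envelope request throughput from the seek figures above.
# All numbers are the comment's pessimistic assumptions, not measurements.
def requests_per_second(seeks_per_second, seeks_per_request=10):
    """Requests a device can service per second if each read request
    costs `seeks_per_request` random reads."""
    return seeks_per_second / seeks_per_request

hdd = requests_per_second(80)        # a single modern hard drive
ssd = requests_per_second(30_000)    # a mid-range SSD (hypothetical IOPS figure)

# A lone HDD covers a "light" load (~1 req/s) but falls short of a
# "moderate" one (~20 req/s); a RAID array or an SSD covers moderate
# and heavy loads with room to spare.
print(f"HDD: ~{hdd:.0f} req/s, SSD: ~{ssd:.0f} req/s")
```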
I think these are realistic expectations for most apps. Obviously the likes of Netflix and Uber get orders of magnitude more, but 99.9% of apps aren't a Netflix or an Uber, and you don't have to optimize for scale until your app is on a trajectory to become one; putting your database on an SSD already lets you handle several thousand concurrent users with ease.
RDBMSs are typically pretty good at keeping the frequently requested data in RAM. This disguises the latency of disk access, so performance will heavily depend on access patterns. If you serve 1TB of data from a DB with 8GB of RAM and that is sufficient for your use cases, I won't stop you. If you expect low, predictable latency (<1ms) even on a 98/2 r/w system, then it's not worth the headache.
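A minimal sketch of why caching disguises, but doesn't eliminate, disk latency, assuming hypothetical figures of 0.05 ms for a RAM hit and 8 ms for an HDD seek: the mean can stay well under 1 ms at high hit rates, yet every miss still pays a full seek, which is what breaks the "predictable" part.

```python
# Effective mean read latency as a function of cache hit rate.
# Hypothetical latencies: 0.05 ms for a RAM hit, 8 ms for an HDD seek.
def effective_latency_ms(hit_rate, ram_ms=0.05, disk_ms=8.0):
    return hit_rate * ram_ms + (1 - hit_rate) * disk_ms

for hit in (0.90, 0.98, 0.999):
    # The mean looks fine at high hit rates, but the tail latency of a
    # miss remains ~8 ms no matter how good the hit rate gets.
    print(f"hit rate {hit:.1%}: mean {effective_latency_ms(hit):.3f} ms")
```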
Of course everything depends on the use case and constraints. I'm highlighting the extremes here; the initial confusion was about why DBs require so much RAM. Traditional DBs are optimized around RAM; that's where they perform best. You can abuse that, but it's not the best they can be in terms of latency, predictability, and stability.
Potentially a mix of both, though MongoDB was still very young when we were using it. Places like Google were championing it, or rather places that can afford to burn a ton of RAM.
Most countries currently have laws that openly require telecommunications providers, but not messaging apps, to do lawful intercept. This isn't hidden.
Most spy agencies find having to get a warrant from a judge for each target too cumbersome, so they tap into fiber cables and do unlawful intercept as well.
> Most countries currently have laws that openly require telecommunications providers, but not messaging apps, to do lawful intercept. This isn't hidden.
That is the official legal framework, but not really how it works. Large tech companies are incentivized to cooperate with governments and will do so without hesitation: governments can make it nearly impossible to operate a business, and cooperation clears that hurdle. This is not some theory but straight from the horse's mouth. I have spoken with executives at every company I have worked for, and this is the general consensus at large companies.
Hacker News does not prevent mass hysteria and ragebaiting. It seems like for any social media site, the appearance of preventing negative behaviors is worth far more than actually preventing them, which can actually subtract stakeholder value in many cases.
I want some kind of algorithm though. If some of my friends post a lot and some post a little, I want to see a more even split. And I want to see some posts from friends of friends, and from strangers who are posting similarly to my friends.
I honestly don't think it's possible for platforms to have "nice" algorithms like this without slowly slipping into the "maximum-engagement" algorithms we're plagued with now. I remember seeing this happen with Instagram, slowly going from a chronological feed to a confusing one where you can never be certain you've caught up with your network.
In a perfect world it would be great to have a platform that allows open-sourced algorithms for people to choose from, although that's a crazy pipe dream.
I get that question a lot and I can understand it. There are two things at work here. One is freedom of speech: depending on your local rules (I am from Germany, so our rules, e.g. concerning the Nazi past, are different from US rules), freedom of speech should be guaranteed, and opinions shouldn't be labeled as disinformation or manipulation based on their content.
What we are monitoring are deceptive patterns at the text or transcript level. Deceptive patterns can be things like information inconsistency within one post, context shifts within one post used to reframe something, or video patterns like fake statistics or fake headlines that are inconsistent with the main content.
All of these patterns are science-backed psychological manipulation patterns, and they are consistently used in the most viral posts we detect. My perspective after one year of working on this is that average media literacy is even lower than we think, and that together with the social media platforms we have built an evolutionary system optimized to increase the performance of digital manipulation actors.