GitHub is claiming that a usage spike in 2026 is the cause of availability issues in 2025, so their explanation is clearly incomplete at best. The usage spike may be why things have failed to get better despite the effort they've put into improving things, but it isn't the root cause of the problems.
But the outages have been getting worse and worse even before anything related to AI took off.
The issue is that they're not a scrappy startup anymore: they are de facto running the internet's development infrastructure and are owned by a trillion-dollar company.
So the bar they're measured by has changed, and they haven't even tried to keep up. Paying lip service to reliability when you are critical infrastructure is not going to go well.
There were reliability issues in 2010 for sure, but it feels worse now; the period before acquisition was the most stable (2014-2017).
Funny how Windows updates are never postponed for lack of "scaling". I know, I know, completely different stuff here, but aren't test VMs and CI VMs being updated constantly?
I'm old enough to remember the Hotmail migration to Win2k (then 2k3) and the postmortem. I was also old enough to look at the Rotor source code. Yeah, that one, running managed code on FreeBSD.
Their own greed is causing their issues. They could be doing a million different things to reduce demand, but they don't want to dampen their current growth and have opted to continue scaling up at the cost of quality.
Coupled with this (unsubstantiated but thorough) discussion on the internals of Azure: if even a fraction of the below-linked post is true, GitHub's abnormally filesystem-intensive workflows would have wildly unpredictable performance and reliability forced onto them by Azure.
Azure also regularly has incidents due to capacity issues in several regions, which take many Azure-managed services down with them. Some of those incidents have been open continuously for many months now.
I think it doesn't need to be a large X% increase, just needs to hit some critical infra threshold where various services start failing and cascade. Weakest link and everything.
It's been on a downward trend before agentic coding took over. I suspect it's a mix of Microsoft culture and Microsoft infrastructure. It's starting to feel about the same quality as other Microsoft services.
Short aside: I have to rehost dotnet CLI binaries because their hosting infrastructure is so unreliable that it was regularly causing CI failures.
It began pretty much immediately after the acquisition. There was an uptime chart making the rounds a while back, and less than a year in, the all green data points of pre-Microsoft Github turned to lots of red. I assume brain drain, as everyone vested or otherwise completed their contractual requirements and cashed out. And, Microsoft has never had a great reliability culture in their cloud services, so no in-house talent to effectively take over.
Incomplete pull request results in repositories
Update - We are actively reindexing the remaining Elasticsearch indexes. Our priority is ensuring correctness and avoiding further impact. We are taking a measured approach to safely backfill data and will share additional updates as progress continues.
Apr 28, 2026 - 15:58 UTC
Update - After yesterday’s incident, we are investigating cases where /pulls and /repo/pulls pages are not showing all indexed pull requests. This is because our Elasticsearch cluster does not currently contain all indexed documents.
No pull request data has been lost. As pull requests are updated, they will be reindexed. We are also working on accelerating a full reindex so these pages return complete results again.
Apr 28, 2026 - 14:51 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests
Apr 28, 2026 - 14:17 UTC
GitHub took a massive hit in credibility when it got bought by Microsoft. We are a burned generation, we have seen the worst of Microsoft. This created a massive crack in the foundation of trust for most people.
Then Copilot happened. Some people dug into how the training was done, and one GitHub employee responded by mail that every public repository, including GPL-licensed ones, is included (the relevant tweets are unfortunately deleted). The crack deepened. Some of us (incl. me) left GitHub.
As Copilot became entrenched, Microsoft's product development practices and philosophy took over, and vibe coding started to be used by hordes of developers; GitHub's code foundations started to crumble. Add the big migrations they're doing and the regressions they're causing in the UI now, and here we are.
GitHub's first enshittification cycle is over. Now we're starting the second cycle. The bloated, slow, entrenched hegemon's decay from relevance phase.
It'll be a slow decay. It won't fall in a day, but the golden era is long gone.
Any more context on the Copilot training note? More pointers would be very interesting, but we'd need to keep in mind how many different underlying models were (are?) branded as Copilot. I thought at some point the "Copilot" model in autocomplete contexts was a fine-tuned GPT from OpenAI.
Re: GPL, there are other open access datasets of git repos that make some distinctions between copyleft licenses but those are older resources now.
Please see below. This is from the OG, "first generation" Copilot, from 2022. If I can find any more from my dusty trove, I'll edit or reply to this very comment. I can't do more digging now, because I'm in a pinch.
> Re: GPL, there are other open access datasets of git repos that make some distinctions between copyleft licenses but those are older resources now.
Supposedly "The Stack" contains only permissively licensed code, but there are two repositories of mine inside it. One is a very simple logging library without any license (which implies "All Rights Reserved"), and the other is a fork of LightDM I worked on, which is GPL-licensed.
So any "permissively licensed" dataset probably contains at least one copylefted or all-rights-reserved codebase, making such datasets highly suspect.
== EDIT ==
Found some. Kagi's date-constrained search to the rescue.
#2 makes #1 a big problem. AI-generated code is fine if you have thorough engineering practices around it. Are they blindly merging AI-generated code without review? Maybe. That's an issue of engineering practices, not of the use of generative AI in general.
Well, it started just after the Microsoft acquisition, when AI did not exist, and coincided with news of Microsoft fully ingesting the GitHub team and forcing architecture and priority changes, and has steadily continued since. So idk, it’s a mystery. Maybe it was caused by the thing that did not exist when it happened. Microsoft just posted on a PR blog that it’s the thing that did not exist, and they’re famously truthful, open, and altruistic.
If you've worked with Azure, you don't need the problem explained to you. I'll believe that workloads are different now than they were some years ago, but... they are literally the cause of it, so no sympathy from me there.
Azure is not the best, but it mostly works. GitHub gets only 98% reliability for the Git operations component: reading and committing. This is the most basic component. The fact that they are not on this 24/7 and it isn't fixed is the result of culture (i.e., what is prioritized and what quality is accepted).
I often see people frame music as mathematical manipulation, or try to approach music-making from a "first principles" approach where those principles are mathematics and physics. But watching musicians talk about making music, I seldom see any discussion of the underlying math; instead I see discussions of timbres, instruments, and stylistic/historical influences. Musicians who make good music seem to believe "first principles" involves historical knowledge and a well-listened ear, and nothing involving math. My question is: Is thinking about music as applied mathematics a good way to create good music? Or is it just the most easily digestible model of music for the crowd on this site?
> Is thinking about music as applied mathematics a good way to create good music?
As an instruction, I think clearly not; the fact that lots of musicians aren't mathematical at all but create great music seems to prove it to me.
But it is interesting to think about musicians who do seem to think about music this way. Bach is definitely a good example, where the system of counterpoint is very complex. I'm not sure if she'd describe herself in these terms, but I've always gotten the impression that Laurie Spiegel thinks about music a little like that too. Then there's stuff like Coltrane's Giant Steps, where the whole piece is based around a sort of music-theory "trick".
So maybe not generally, but there's definitely some awesome music out of that kind of relationship.
Maths and physics are a terrible way to learn the artistic side of music, but if you are interested in "why does a fifth chord sound nice" or "why are the black and white keys on a piano in that particular pattern", you can get interesting (partial) answers by looking into the maths of frequency ratios and the physics of overtones and how they affect the cilia of the inner ear. Music differs between cultures, but there are some universals such as the Octave (edit: by which I mean the doubling of frequency, not how it's divided up), and nearly all cultures have some form of music... There is something universally human about it, and so it's a doorway to studying how our minds work.
> but there are some universals such as the Octave
Universal in the sense that a number of rocks or a number of sheep can be doubled just as a frequency can?
The notion that there are eight subdivisions to a doubled frequency interval isn't universal. Balinese Gamelan doesn't even necessarily have an agreed number of "notes" in an "Octave" from one village to the next.
There aren't 8 subdivisions in an octave in western music either. Well, there are in any given scale, but there are also many scales. "Octave" is a misleading term. Given that it's just a doubling of frequency, the term is sort of as good as any other, and that doubling exists in pretty much all cultures that have developed string-, pipe-, or other resonant-body-based music (including hitting hollow logs and plucking vibrating reeds / sticks / tines).
It's pretty much the foundational idea of any modality. No matter how you divide it up, the purest harmony is doubling or halving.
The commenter presumably was talking about octave equivalence, which is reportedly present across all or nearly all historical musical cultures that we know about. It’s also supposedly present in some other mammals.
reportedly present, yes .. but the debate is still hot on universal.
I was asking to tease out a point of view; again, Gamelan doesn't necessarily have powers-of-two, or 12, etc. divisions of a doubling (or Octave, if we're using that term). It's a non-Western style of percussion that has a surprising number of local variations in divisions and tunings (it's essentially near-unique to Balinese culture).
The Octave Wikipedia entry includes:
> Octave equivalence is a part of most musical cultures, but is far from universal in "primitive" and early music
Yeah, that sentence on Wikipedia is a bit unclear though. It might be merely claiming that, for some musical cultures, we don’t have a written record of an explicit notion of octave equivalence or tone name circularity.
But I suspect there's a clear biological mechanism which makes it easy to mistake one octave for another from any source of roughly harmonic sound. This is due to the similarity in the overtones of two harmonic sounds that differ by an octave. I would be surprised if this mechanism isn't universal, although its influence on various musical systems can obviously vary a lot.
> Universal in the sense that a number of rocks or a number of sheep can be doubled just as a frequency can?
Yes, that's what I meant: the doubling of frequency. It might seem trivial, but the fact that doubling a frequency sounds "right" to humans is actually quite interesting. Why does it sound "right"?
Interference is most of the answer. With frequencies f and 2f you get the smoothest interference patterns, even if the tones have a lot of harmonics. The effect weakens as the ratios involve larger integers.
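To make the overtone argument concrete, here's a toy Python sketch (my own illustration, not from the comment): list the first few harmonics of each tone and see which ones coincide.

```python
def harmonics(fundamental, n):
    """First n harmonics of a tone, rounded to sidestep float noise."""
    return {round(fundamental * k, 2) for k in range(1, n + 1)}

base = harmonics(100.0, 12)                # a 100 Hz tone and its overtones
octave = harmonics(200.0, 6)               # 2:1 ratio, one octave up
tritone = harmonics(100.0 * 2 ** 0.5, 6)   # ~1.414:1, an equal-tempered tritone

# Every harmonic of the octave coincides with a harmonic of the base tone,
# so the combined spectrum produces minimal beating:
assert octave <= base
# The tritone's harmonics all fall between the base tone's, giving the
# rough interference we tend to hear as dissonance:
assert tritone.isdisjoint(base)
```

The asserts pass: a 2:1 interval shares its entire overtone series with the lower tone, while an irrational ratio shares none of it.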
> So yes, the 12-tone scale is a universal thing -
I don't follow the logic here though. It's certainly true that a 12-tone / Chromatic scale is ubiquitous within the Western Music tradition .. but the universe is reportedly a little larger.
Even Western Music includes exceptions like the 9-note augmented scale, though the argument can be made that it's a 12-scale with 3 bits "missing" - not a case that can be made about a non-western 7 note percussive scale.
All scales in all cultures are based on octaves and fifths. (E.g., the ancient Chinese musical scale also has 12 tones.)
Also the so-called "Western music" standardized on 12 tones very late in the process, long after the Chinese figured it out.
> a 12-scale with 3 bits "missing"
That's all scales, even the "non-Western" ones. Microtonality is added to the standard 12 tone to add tone effects. (Synthesizers in pop music do the same trick.)
To confirm the claim that "all scales in all cultures are based on octaves and fifths" one might study the scales.zip scale files and find those that do not contain octaves and fifths, which should naturally be zero if the claim is true.
Note also that certain musical traditions were suppressed or eradicated due to their unfortunate habit of using dissonant notes such as minor seconds, as opposed to the consonant triads favored by a particular group recently in power around the world. Happy Easter!
Thank you, I am somewhat aware of the knobs present on a synth, though fail to see the relevance given that various other instruments do not have dynamic retuning options. Which 12 tone scale did you have in mind (for there are many) and why do you think 12 (for there are many other numbers, some of which are used by various scale systems) is a natural property of the world? Perhaps with a more cogent argument you could make a better case for your opinion.
129.75, i.e. (3/2)^12, is not really close to the nearest power of two, 2^7 = 128. And 31-TET scales have a better approximation of a major third (and an impressively better approximation of a minor 7th).
The obvious exception in the western system would be the blues scale, which arguably has 9 tones (7 equal tempered notes, plus a just tempered 3rd and 7th).
And Indian ragas break all of these rules. They have scales that don't have 8 notes, scales that don't use equal temperament, and even a few scales that don't repeat on octaves.
Equal temperament is a different issue. The 12 natural tones are necessarily approximated and can't be represented exactly, because stacking twelve pure fifths overshoots seven octaves by roughly 1 percent (the Pythagorean comma).
As another commenter below has said, "mathematics might be a useful way to understand music", but it's not how compelling music is made.
Mathematics are fundamental to scales and the harmonic series, and knowing about them will help you refine certain choices, but it's not going to help you write a dramatic melody or an emotionally resonant chord progression, or play an energizing rhythm, even if there are mathematical explanations sometimes.
Good music comes from being a good listener, having a strong sense of what's possible, where it could go, and then delivering something surprising. Telling a story with your melody and supporting the arc of that gesture with harmony that accentuates or contrasts it.
Again, there's a mathematical explanation for harmony and dissonance, but players aren't thinking at that granularity. They're operating one, two, or three levels of abstraction above it: they're thinking about telling a story, evoking an emotion, and exciting an audience in the moment.
Music is more like language. (This is profoundly true.) There is very little that's mathematical about music; and even the bit that is, isn't really (since nobody actually uses just intonation).
Digital sound production, however. Yes. There's all kinds of thoroughly unpleasant mathematics, none of which you actually need to know unless you're writing computer music software.
(I write computer music software, and I am also a jazz musician).
It sounds pedantic, but I think it's important: maths and physics are often used to describe sounds, their relationships and emergent properties through combination. Maths and physics aren't ever really used to describe music.
It's like telling someone they can paint a masterpiece because they understand that Fe4[Fe(CN)6]3 makes an aesthetically pleasing blue pigment.
> Is thinking about music as applied mathematics a good way to create good music? Or is it just the most easily digestible model of music for the crowd on this site?
It's a great way to analyse music (e.g. to categorise, understand, and communicate detail), but that does not mean it's a good way to create it. There's a lot of beauty in finding those abstractions and I think that representation appeals to a lot of people here.
Discussions about timbre, instrumentation, and stylistic influence often run parallel to those about math. When you have 90 minutes to spare, I highly recommend strapping in for a listen to https://malwebb.com/notnoi.html.
There are a lot of really incredible musicians, composers, producers, and educators who go deep on the math. There are also plenty who don't. People build mental models in different ways. That's a good thing and a big part of what makes most art interesting.
Wonderful question. I suspect it's partially the culture issue you point to, but also a practical issue of composition. If we decompose sound into basic waveforms, similar to page 18 of the subject PDF, we then have parts that we can reassemble. We can take the defense-funded DSP math of the likes of a John Cooley or a John Tukey and build an engine for assembling the parts of sound.
All this being said, I think that's a process of convenience and a historical path, not an absolute constraint. We have more flexible means of communicating with machines today, and I strongly encourage someone to work on a new UI for computer music. "Jazz trio: piano, upright bass, and drums. Start the drummer laid-back, piano blowing over the changes, then piano on top."
Indeed it described an output and was also a UI. I meant that describing the output could be the UI. I pictured a textbox or a whisper-style text to speech session. Basically an ai chatbot specialized for generating music.
I couldn't figure out precisely what that video showed, but it was fascinating.
Somehow it reminded me of the orca music programming environment.
It doesn't take that long to learn to read sheet music (or tabs especially) and you could treat it like just playing a sequence of notes but you're never going to get far that way. You need to understand why certain notes go together. Some people have done that without theory but you're going to get much further with even some basic theory.
Think of it this way: imagine you first saw the word "HELLO". You could deconstruct that and remember that there are 11 lines and 1 circle, but that's not how you learn to read or write. You learn letters, which are collections of lines. So you learn the concept of "H": it has a sound, and it is 3 lines. You then learn to put letters together, how to sound out something that's written, and, to a varying degree (depending on the language), how to take something said and write it down.
Music theory is like that. Sheet music may be a bunch of circles and lines on a sheet but really it's describing keys and usually a chord-progression. Some sheet music will explicitly just list the chords at the top like A, Em, Asus4, etc.
The 12 notes are constructed from harmonics, specifically 2:1 and 3:2. This part is maths. But the frequencies are adjusted slightly in a system called "equal temperament" where the ratio of 2 adjacent notes is the 12th root of 2.
From there you generally play a subset of those notes (often 7). That's called a scale (e.g. the major or minor scale). The chords in that scale can then be identified by a Roman numeral within a key. So the I chord in the C major scale is a C; the IV chord is an F. Depending on the starting note of the scale you'll get sharps (#) and flats (♭) to denote that they are a different pitch. An easy way to remember this: the C major scale is just the white keys, starting from C, and its whole and half steps follow the pattern of the black keys. As an aside, so is the A minor scale.
Why do I say all this? Because a huge amount of modern music is simply a I-IV-V chord progression within whatever scale you're using. So if you know a little theory, you can choose a key and a chord progression that will inherently sound nice together. There's more to it, of course, but understanding what a key is, what chords are, and what a chord progression is makes a pretty good start.
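The construction in the comments above can be sketched in a few lines of Python (the names and the A4 = 440 Hz reference are my own choices for illustration): build 12-TET pitches, derive a major scale from the whole/half-step pattern, and read off the roots of the I, IV, and V chords.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq(semitones_from_a4, a4=440.0):
    """12-tone equal temperament: adjacent notes differ by a factor of 2**(1/12)."""
    return a4 * 2 ** (semitones_from_a4 / 12)

# A major scale picks 7 of the 12 notes using the pattern W-W-H-W-W-W-H,
# i.e. 0, 2, 4, 5, 7, 9, 11 semitones above the root.
MAJOR_OFFSETS = [0, 2, 4, 5, 7, 9, 11]

def major_scale(root):
    start = NOTES.index(root)
    return [NOTES[(start + s) % 12] for s in MAJOR_OFFSETS]

c_major = major_scale("C")                    # the white keys, starting from C
I, IV, V = c_major[0], c_major[3], c_major[4] # roots of the I-IV-V progression

assert c_major == ["C", "D", "E", "F", "G", "A", "B"]
assert (I, IV, V) == ("C", "F", "G")
assert freq(12) == 880.0                      # one octave up exactly doubles
assert round(freq(7), 2) == 659.26            # equal-tempered fifth, ~1.5 * 440
```

Note the last line: the equal-tempered fifth (2^(7/12) ≈ 1.49831) lands just under the pure 3:2 ratio, which is exactly the small adjustment the comment describes.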
Upvoted because it’s a thoughtful question, but honestly I think it’s just that this book and many others like it are addressed primarily to people who are going to use tools like SuperCollider or CSound or raw dsp to create their own entirely original technology stacks for creative work, and an understanding of the physics/math of sound is pretty key to that kind of work, regardless of the musicality of their later creative production.
> My question is: Is thinking about music as applied mathematics a good way to create good music? Or is it just the most easily digestible model of music for the crowd on this site?
You are probably aware that there are these things called synthesizers, which exist both in hardware and in software: complex pieces of technology that can shape sound. There are people who specialize in creating them (with code and/or electronics), people who specialize in programming them (creating presets), and people who excel at using them to make music. And many more different profiles in between. Each will care about different aspects; they all contribute to making music.
Life is not black and white, and music neither. What is even "good music"? What is your mental model for "the crowd on this site"? In your questions, aren't you reducing the possibilities of learning by putting these into boxes?
The world is big, life is rich and people are much more diverse than what one typically perceives.
Both of these things. The timbres can be explained through evolutionary biology. The same brain centers used for processing movement in 3d space also start firing up to start predicting where any given piece of music will go.
In my opinion, good music typically comes from the flow state. In other words, less thinking and more practice/exploration leads to the best results.
From personal experience, pattern-recognition is the most useful "applied math" skill when making music. I use it when identifying intervals between notes, and chord progressions, which you need when you're trying to get the idea out of your head and onto the instrument you're playing or song you're writing.
Theory and practice are both important. A few analogues:
- fundamentals on an instrument vs performance
- low-level graphics programming vs using a game engine
- comp sci vs software engineering
The theory side here gets to the root of things, is valid for any sort of DAW/DSP software, and has the benefit of being easier to teach. Practice is obviously more important though, especially in the arts. It's better to grope in the dark than do nothing.
Musicians are already experiencing this. The likes of Suno are churning out high quality songs with only a minimal amount of prompting material.
One can roughly prototype a song, giving it the structure, melody, harmony, rhythm, lyrics that a finished song might have, upload it and request a cover in a particular style. The output will often resemble a highly competent human performance.
I'm no musician, but it sounds like trying to understand human psychology by studying the brain, or cells by studying physics, or the colony by isolating a single ant. Answer: no. It's the idea of emergent phenomena not being reducible to its parts.
I'm a lifelong musician, went to music school to study jazz and orchestration, was a professional film composer for 15 years prior to pivoting to programming. I've read quite a few books on the intersection of math and music.
And not once have I ever felt that these so-called intersections were anything other than contrived.
Of course we can interface with music from a mathematical perspective, but that doesn't mean that we should, or that there's anything particularly illuminating to glean from doing so.
Beyond the very basic math (honestly even that's perhaps too strong a word -- just because something is expressed in numbers doesn't make it _math_) of time signatures and some harmonic concepts up to maybe some of Slonimsky's work, doing so is IMO a fool's errand that exists only to fill space on a TEDx stage.
Microtonal music throws all of that out the window.
Logic only works in the context of definite ontologies. But audio frequencies are continuous, not discrete. It really is all vibes at the end of the day.
Plug for Angine de Poitrine for a contemporary example of music that breaks the rules that define traditional music.
Understanding the soul of music and creativity at a mathematical level is something that not many people are trying to do. But there is an entire world of technology underpinning modern music and sound that is built soundly on math: digital recording, digital signal processing, synthesis, physical modeling, and plenty of other stuff. This seems to be the book's focus.
Sure, there have been plenty of attempts to distill music to a mathematical essence. Certainly the ancient Greeks tried this, and traditional counterpoint resembles math in a number of ways. But at the end of the day, mathematical descriptions of music, and music theory more generally, are more useful as descriptive tools that give language to what people are doing musically and help us understand why we perceive some things as sounding better than others.
Starting with numbers can be good in some respects, like understanding the circle of fifths or how scales are built out of intervals, how chord progressions and harmony work and how to reharmonize, all of which can be augmented with a solid conceptual understanding. But at the end of the day, your ear and creative spirit are your primary asset when it comes to creating good music. This is why computer-generated music has been so bad up until AI took over. Great for building arpeggiators or backing tracks, but good luck creating a beautiful melody in a purely numerical rule-based system.
> I seldom see any discussion of the underlying math, and instead see discussions of timbres, instruments, and stylistic/historical influences
Music today is utter crap at all levels, this is a verifiable scientific fact.
This is probably why.
Music "theory" was invented as a critical tool (i.e., basically to enable reviewers to describe and evaluate the music of the time), not as a composition tool.
Basically, we're holding it wrong and it's doing us harm.
> Music today is utter crap at all levels, this is a verifiable scientific fact.
No it's not, and it's not a verifiable fact. Unless you have a source?
Rick Beato knocks the sameness of "the charts", but there's more to music than that... take a look at who he interviews.
Bach is considered the greatest musical genius of all time, but he was part of an industry and composing was his day job. Each of those BWVs was written in a couple of days. Bach's performers at the time didn't study for years for a single recital; they read the sheet music in an afternoon and then performed the BWV the next day.
Beethoven improvised his pieces on the fly and performed them himself. This wasn't considered as something out of the ordinary at the time.
Can you imagine the average conservatory graduate improvising anything today? Even a pentatonic blues riff?
Clearly we went off the rails big time somewhere along the way. The framework we're using to teach and compose music is actively hindering us.
I don't agree. I grew up with piano and had music friends all through my life. Classical music requires a certain level of math ability. Modern musicians scorn this and frankly it shows.
Many classical musicians have only a cursory understanding of mathematics. Many modern musicians are pushing the boundaries. There's a reason there's a genre named "math rock". Also, Jazz probably pushed the maths of music beyond classical music. As a last example, listen to some Meshuggah :-)
Why so rude? You really think Meshuggah is some profound group? Do you have any background in classical music? European screamo bands are just the worst. And personally people I've met from that scene rarely understand even the basics of rhythm. So your comment does not match my experience at all.
I've been playing piano for over 40 years and other instruments almost as long. I'm a massive fan of classical music. If you think metal bands don't understand rhythm you obviously haven't listened to much.
I'm not claiming any band to be profound, but many have complex interplay between multiple rhythms, very non standard chord progressions, often non standard tunings, and even microtonal instrumentation.
You seem to be stuck in a world where everyone after Stockhausen suddenly stopped learning music theory. I'm telling you that you're wrong, and that you're obviously not listening to enough modern music.
Stop being a music snob. You are free to like whatever you want but making glaringly incorrect statements about an entire time period encompassing many genres is going to get push back.
Why so rude? This site is ridiculous. You made statements about two entire periods, the current period and the past. Additionally, I am not a snob and I have listened to a lot. I won't even bother to get into this. My college roommate had thousands of records and forced me to listen to all of them.
While it's not production-ready, I've been pleasantly surprised by this functionality when building with it. I love my interpreters to be deterministic, or, when random, to be explicitly seeded. It makes debugging much easier when I can rerun the same program multiple times and expect identical results.
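As a generic illustration of explicit seeding (plain Python, not the interpreter the comment refers to), a local, seeded generator is what makes a "random" run replayable:

```python
import random

def simulate(seed):
    """A tiny 'program' whose randomness is explicitly seeded."""
    rng = random.Random(seed)            # local generator: no hidden global state
    return [rng.randint(0, 99) for _ in range(5)]

# Same seed, identical run: a failing case can be replayed exactly.
assert simulate(42) == simulate(42)
```

Passing the seed in (rather than calling `random.seed()` globally) keeps the determinism visible at the call site, which is what makes the debugging workflow in the comment possible.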
Interestingly, I think things that should not be deterministic should actually be forced not to be.
Swift, for instance, explicitly makes iterating over a dictionary non-deterministic (by randomizing the iteration order), in order to catch weird bugs early if a client relies (knowingly or not) on the specific order of the dictionary's elements.
This claim sounds vaguely familiar to me (though the documentation on Dictionary does not state any reason for the unpredictable iteration order). The more common reason for languages to have unstable hash-table iteration orders is protection against hash flooding: malicious input crafted so that all keys hash to the same bucket. Since iteration order depends on bucket order, randomizing the hash seed also randomizes iteration.
Oh yeah you’re right, apparently the main reason was to avoid hash-flooding attacks[1].
I do seem to remember there was a claim regarding the fact that it also prevented a certain class of errors (that I mentioned earlier), but I cannot find the source again, so it might just be my memory playing tricks on me.
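A rough Python sketch of why predictable hashing matters (a deliberately contrived worst case, not Swift's actual implementation): when an attacker can force every key into one bucket, hash-table inserts degrade from O(1) to O(n), which is the quadratic blow-up that seed randomization prevents.

```python
import time

class ColliderKey:
    """A key whose hash is constant: the collision pattern an attacker
    tries to force by predicting the table's hash function."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 0                          # every key falls into one bucket
    def __eq__(self, other):
        return self.n == other.n

def insert_time(make_key, count):
    table, start = {}, time.perf_counter()
    for i in range(count):
        table[make_key(i)] = i            # colliding keys make this O(i) per insert
    return time.perf_counter() - start

# With 1000 keys, the all-collisions case is quadratic overall and
# measurably slower than well-distributed integer keys.
assert insert_time(ColliderKey, 1000) > insert_time(int, 1000)
```

With a randomized per-process seed, an attacker cannot precompute such colliding keys, at the cost of the iteration order changing from run to run.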
The title made me hope for an article about making software serve democracies better instead of consolidating power and wealth. It's about how datacenter build-out in the EU might be accelerated by loosening regulations. Still interesting but a bit of a bummer.
I guess, on that note, are there writeups or articles on how software/compute might be used to help, rather than hinder, liberal democracies? From someone who increasingly sees the tech industry as a tool for authoritarians.
I love their discussion of currying. Currying is very cool theoretically, but I agree that it causes real bugs and isn't used that often. It's cool that most functional compilers automatically curry my functions and give me partial applications, but I'd much rather they enforce that all parameters be provided, and make me explicitly create partial functions when necessary.
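A sketch of the trade-off in Python, which doesn't auto-curry (the function name here is made up for illustration): partial application must be spelled out with `functools.partial`, so an under-applied call fails immediately instead of quietly producing a function value that travels onward.

```python
from functools import partial

def scale(factor, x):
    return factor * x

# Explicit partial application: the intent is visible at the call site.
double = partial(scale, 2)
assert double(21) == 42

# In an auto-currying language, scale(2) would silently evaluate to a
# function, and the forgotten argument might only surface far away.
# Here it fails loudly at the point of the mistake:
try:
    scale(2)
    raised = False
except TypeError:
    raised = True
assert raised
```

This is exactly the enforcement the comment asks for: partial functions exist, but only when you ask for them by name.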
Parents are competing with multi-trillion dollar companies who have invested untold amounts of cash and resources into making their content addictive. When parents try to help their children, it's an uphill battle -- every platform that has kids on it also tends to have porn, or violence, or other things, as these platforms generally have disappointingly ineffective moderation. Most parents turn to age verification because it's the only way they can think of to compete with the likes of Meta or ByteDance, but the issue is that these platforms shouldn't have this content to begin with.

Platforms should be smaller -- the same site shouldn't be serving both pornography and my school district's announcement page and my friend's travel pictures. Large platforms are turning their unwillingness to moderate into legal and privacy issues, when in fact it should simply be a matter of "These platforms have adult content, and these ones don't". Then, parents can much more easily ban specific platforms and topics.

Right now there are no levers to pull or adjust, and parents have their hands tied. You can't take kids off Instagram or TikTok -- they will lose their friends. I hate the fact that the "keep up with my extended family" platform is the same as the "brainrot and addiction" one. The platforms need to be small enough that parents actually have choices on what to let in and what not to. Until either platforms are broken up via antitrust or the burden of moderation is put on the company, we're going to keep getting privacy-infringing solutions.
If you support privacy, you should support antitrust, else we're going to be seeing these same bills again and again and again until parents can effectively protect their children.
I think there's a pretty simple explanation for this: It's hard to admit when we're not doing well. It's easy to say that the world is getting worse, that you're worried for the future, but to admit that you personally are having trouble is depressing and a little humiliating. I'm guilty of this -- even when times are really bad for me personally, I try to be optimistic and consider my current misery as a temporary misfortune. It helps to keep moving forwards.
It’s also possible that what affects you personally is actually going well, but what affects everyone indirectly is not going well. Rivers of plastic may be flowing in the ocean, but your local trash collector collects “recyclables” weekly for no additional charge and you feel good about sorting the trash.
A person is also more in control of what's going on around them personally; as that scope increases, any normal individual has less and less effect. The ant can be optimistic about its chances of surviving the winter while still pessimistic about the fate of all the grasshoppers.
Yup. Long range it looks dire. But things haven't fallen apart *yet*. I don't see why these are supposedly contradictory. The altimeter unwinding at a dizzying pace inflicts no harm on the occupants. But it's an awful lot easier to say "this time it's different" than admit what it says.
This framing seems like a justification for the assumption that "how the world is doing is the equal average of how everyone is individually doing". Quite simply, the "direction of things" is either completely uncontrolled or controlled by a small group of people with incentives misaligned with the rest of the world. Everyone can be doing fine despite losing a war against them.
This can't explain the data in the article, such as the fact that people underestimate the rate at which other survey responders will report being happy.
1. Increasing amount of AI-generated code in their codebase, decreasing the quality of the service.
2. Bought by Microsoft, and their bad engineering culture has spread to GitHub.
Perhaps it's a bit of both.