Everyone wants to pin this on the Microsoft acquisition or incompetence, but it seems pretty clear to me from the material GitHub has posted that AI has 10xed the amount of code being committed to GH, which has downstream effects everywhere - CI, Actions, code ingestion, everywhere. The author pins it on weird things like MS Copilot, which kind of feels like he's listing off things he doesn't like rather than causal factors. This ignores the 800-pound gorilla in the room.
The graph in TFA shows the downtime pattern starting in January 2020. OpenAI released GPT-3.5 in November 2022 (basically December), and LLM/agentic coding didn't really take off in the way you're describing until 2024, and arguably not until 2025.
How can that explain the terrible uptime for the ~4 years post acquisition before all the AI stuff you’re talking about started?
Here's GitHub's historical uptime graph (on which this chart is based), saying there was no recorded downtime that day, or in fact that entire month: https://www.githubstatus.com/uptime?page=40
The article is just dumping HARs from devtools from a status site that hallucinates 100% uptime when it has no data. For example, it claims all GitHub services had 100% uptime in June 1996: https://www.githubstatus.com/uptime?page=200
The graph gives GitHub Actions 100% uptime before it launched to GA in November 2019. That factors into the average uptime for every month on the graph before that. It's fully horseshit.
We don't have enough data to confirm whether it's over- or under-reporting. This sample size of 1 is enough to prove the data is not perfectly accurate, but it's not enough to prove a bias in either direction.
I am making an assumption that if Microsoft saw a lot of false-positive outages they would fix that, but might drag their feet if there was an outage that didn't get properly recorded (assuming it's automatic to begin with; it might be that a human needs to remember to update it).
The subjective experience I and others report is that GitHub feels like it has gotten significantly worse over the last few months. If you look at the month-over-month view of "Uptime history" in the cited link[1], it confirms this: it's been sub-90 (even sub-80 last month) essentially since the start of this year (i.e. when GitHub says that commit activity 10xed). Go back even a year and it's all in the high 90s.
I honestly can't explain the discrepancy between the graph in the article and the month-over-month stats on the same page, but the latter tracks with both my own subjective experience of GitHub and their own internal metrics.
Yeah, I had the exact same response after reading the post. I mean, I'm all for jumping on the Microsoft hate train, but not if it misses the elephant in the room. Let's say the _perfect_ GitHub replacement spawns tomorrow. What's preventing the same infrastructure challenges of millions of lines of AI-generated code from destroying it?
I think centralized code hosting is pretty much going to get killed by AI. Just like it's doing to social media.
> I mean, I'm all for jumping on the Microsoft hate train, but not if it misses the elephant in the room.
That elephant didn’t even exist yet for the first few years of poor uptime shown in the graph in TFA… I don’t really disagree if we’re talking about the recent uptime issues, but how does that explain the years 2020-2023?
It doesn't. It just means if they were having problems before, they've now been made significantly worse by AI (on the free tier). All I'm saying is that the problem is bigger than, "Microsoft sucks."
> What's preventing the same infrastructure challenges of millions of lines of AI-generated code from destroying it?
There's something called "rate limits" that engineers not working for GitHub have probably heard of; it's this crazy idea that you should limit the load on your infra in order to avoid downtime. GitHub is not the first free service to ever have to deal with bots.
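The core idea fits in a dozen lines - here's a minimal token-bucket sketch (hypothetical numbers, obviously not GitHub's actual implementation):

    import time

    class TokenBucket:
        """Each client gets `rate` requests/sec, with bursts up to `capacity`."""
        def __init__(self, rate=5.0, capacity=20.0):  # hypothetical limits
            self.rate, self.capacity = rate, capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill in proportion to the time elapsed since the last request.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # over the limit: serve a 429 instead of falling over

Per-client buckets like this are why most free services degrade gracefully for abusers instead of melting down for everyone.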
> I think centralized code hosting is pretty much going to get killed by AI. Just like it's doing to social media.
Private corporate codebases are a poor fit for GH because they don't benefit from public social-graph effects. And the typical codebase isn't so large as to be technically challenging to handle with OSS tools. I'd guess they make up a substantial share of revenue.
But once the reliability is called into question, self-hosted or smaller alternatives start to look good. Although there's some trickiness there if you want to be super cautious about making sure you can get to your code+infra in case of a vendor incident, especially if you're cloud-based.
Because if you were building GitHub from scratch today you wouldn't build it the same way and would benefit from many of the technological advancements of the last 2 decades (nearly).
Of all the awful things AI is doing and will be doing to society, killing centralized code hosting and social media will be its shiniest moments; both deserve to die painful deaths.
> How did people do it before github? Did everyone write everything with peek and poke?
I've been sharing GPL projects since 1999. We didn't need peek and poke (both of which I have also used, further back in history...), but we managed nevertheless.
Prior to github I shared software on sourceforge (and others). Prior to that I published stuff on Freshmeat.
Prior to that I downloaded games others shared (not open source) on Happy Puppy.
Prior to that I used usenet to find and download games, shareware, etc.
Prior to that I used ftp to (IIRC) ftp.sunsite.edu, ftp.nic.fi, and others.
Prior to that I got news of new releases using Gopher.
Finally, prior to that, I actually did use peek and poke to write software :-/
If github went away, and centralised repos went away, we'd still have something...
Why is centralized code hosting getting killed? I'm running an open-source project, >99% of the code is AI-generated, and I could not do this without GitHub. AI-generated source code needs a place where AIs and people can collaborate. I'm expecting GitHub to be hugely successful, but mostly for an AI audience.
I'm sure the underlying infra is not a single server, so this is mostly a period where they have to adapt to higher loads due to AI becoming actually usable in the last 8 months. It's basically proof of how well AI works these days. Give it a few months so they can scale and it'll get better. Remember the Twitter fail whale? Growth pains that can and will be solved.
> It's basically proof of how well AI works these days. Give it a few months so they can scale and it'll get better. Remember the Twitter fail whale? Growth pains that can and will be solved.
GitHub's problems can technically be solved, but that doesn't mean they can be solved in a way where the economics still work out.
If AI use is 10x-ing the amount of infrastructure costs for GitHub but not 10x-ing the amount of money Microsoft brings in from GitHub then there is certainly no guarantee they will bother to solve these issues adequately.
And I'd be shocked if the revenue side of things isn't lagging way behind the extra usage post-AI-era, both because a lot of the new use is probably on the GitHub free tier, and because even on the paid tier most usage (other than CI/Actions, AFAIK) is on a fixed subscription cost per user regardless of how much you are slamming their servers, and it is unclear how much they can raise that price without current enterprise users fleeing.
Twitter had a clearer goal that aligned with the financials... support more people stably, show more ads. Things are less clear with GitHub's business model where the free tier is a loss leader for the paid tier but the expansion in usage is likely to balloon the free tier usage at a far faster rate than the paid tier usage.
Also (and this part is admittedly far more speculative) if AI labs are to be believed this is still early days for AI usage and we'll still see massive usage growth over the next few years. If GitHub is already having existential trouble at the beginning of the curve, what hope do they have to scale up with their current business model if AI usage actually does ramp up exponentially?
> And I'd be shocked if the revenue side of things isn't lagging way behind the extra usage post-AI-era, both because a lot of the new use is probably on the GitHub free tier, and because even on the paid tier most usage (other than CI/Actions, AFAIK) is on a fixed subscription cost per user regardless of how much you are slamming their servers, and it is unclear how much they can raise that price without current enterprise users fleeing.
I'd guess most of the costs GitHub incurs outside of Actions as part of the enterprise flat-rate tier are a fraction of what enterprises are paying for AI in order to incur those costs in the first place.
If a company has to pay $5 extra to GitHub for every $100 of extra AI spend due to that AI use creating disproportionate load, I've got a hard time imagining that GitHub will be the thing that gets fled from.
As far as the free tier goes, it seems like there should be a path to making prohibitively costly usage patterns high-friction (e.g. limit the free Actions minutes you get to a certain number per month). As long as the limits are roughly proportional to the actual costs incurred, there's not too much risk of people fleeing to a competing service, because the only way a competing service could undercut the costs is by taking steep losses itself, which isn't much of a business model just to attract people's code repositories.
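A sketch of what "limits proportional to cost" could look like - all names and numbers here are hypothetical, just to show the shape of it:

    # Hypothetical cost-unit pricing: each metered action is weighted by
    # roughly what it costs to serve, and the free tier gets a monthly pool.
    FREE_MONTHLY_UNITS = 2000

    COST_UNITS = {
        "git_push": 1,
        "api_call": 1,
        "actions_minute": 50,  # CI minutes dominate the real cost
    }

    def charge(used_so_far, action):
        """Return (allowed, new_total); over-quota requests get throttled."""
        cost = COST_UNITS[action]
        if used_so_far + cost > FREE_MONTHLY_UNITS:
            return False, used_so_far  # prompt an upgrade rather than ban
        return True, used_so_far + cost

The point is that nobody light on usage ever notices the limit, while agent-driven heavy hitters hit friction exactly in proportion to the load they generate.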
Yeah, the monetization bit is challenging. I'll ask my agent to click some of the ads GitHub serves it ;-)
But getting this infrastructure right is crucial for a future where most of the code is AI-generated. GitHub puts Microsoft in a good position to experiment and learn how to optimize GitHub (Enterprise) for the future.
Nate B. Jones on YouTube, https://youtu.be/FDkvRl1RlT0?si=AEYlUchm_oalMSzf, argues that Atlassian might be an interesting acquisition for Anthropic, as it provides most of the context that AI at enterprises needs. When executed well, GitHub Enterprise can offer Microsoft the same value: the context AI needs in the future.
> But getting this infrastructure right is crucial for a future where most of the code is AI-generated.
That's not the problem. The revenue model they have is based on a certain amount of usage from the people who do not pay (you, for example), and a certain amount of usage from the people who do pay (enterprises).
If you 100x your usage, then they need 100x the infra, which means they need 100x the revenue.
At that sort of usage enterprises would rather self-host, and GitHub would be left with only the free users, who are almost all like you now - hammering their servers but not paying for it.
If you self-host, for $5/m you can have your own VPS, but that doesn't really solve the problem as much as you'd think - those are all shared vCPUs, so you can't hammer them all the time either, because then the provider has to increase their infra as well so that fewer accounts share a single CPU.
Either way, if you want to generate code with AI at the speed that an agent can, you'll have to pay for it one way or another.
Also, one thing the numbers they published show is that the bits that are growing 10x YoY (and which they expect to get "worse") are all the things you get "unlimited" mileage out of (even if you're a paying customer): repos, commits, PRs.
Things that have usage-based billing (like Actions minutes) grow closer to 2x YoY.
When there's a dollar amount attached, people don't 10x, because it's not worth it. They splurge when it's cheap and unlimited.
Well either Microsoft finds a way, or Anthropic will. I'm sure they'd love to host all these projects with all the source and context. Maybe they should buy GitLab, or Atlassian.
> But getting this infrastructure right is crucial for a future where most of the code is AI-generated.
If that is the future, then source code hosting will be the least of our worries. The entire industry will collapse because the software will stop working.
> AI-generated source code needs a place where AIs and people can collaborate. I'm expecting GitHub to be hugely successful, but mostly for an AI audience.
Are you paying them in proportion to the resources they expend on you?
There's this thing called "sustainability", and every company needs to have it. GitHub cannot continue on the current trajectory where every AI bro wants to run an agent that generates 1000s of lines of code per hour and dozens of commits per hour... and provide that for free to tens of millions of users who won't pay.
That being said, Microsoft does have an opportunity here - AI bros are willing to pay $200/m to burn tokens, so GitHub should offer a plan for Copilot, say $400/m, that includes a repo.
If they don't ban AI agents on free tiers, they are going to be out of business soon.
Three months after the Microsoft acquisition, GitHub expanded the free plan to include unlimited private repos.
The next year they removed the limitation on collaborators on private repos for free users.
In the last 4 years they’ve significantly improved their project management tools. I think a lot of teams can make do with GitHub Projects, they’re pretty decent.
Who knows if any of these are directly because of Microsoft or not. But there have naturally been material improvements to GitHub in the years since it was bought by Microsoft.
> GitHub hasn't changed in any positive way since the acquisition.
It's more that any positive changes they have made are being outright dismissed or forgotten. They removed several restrictions that GitHub had on private accounts, as well as on GitHub Actions. Aside from the downtimes, the GitHub of today is fantastic compared to pre-acquisition GitHub.
I'm loving it. Running an open-source project that's mostly AI-generated, I don't have to think about version control, building and testing my app, running AI code review, hosting my docs website, the API and CLI that enable Claude Code to interact with everything, etc.
It provides huge value for anyone running an open-source, AI-generated project.
"Yes, it (AI) will kill open source—at least as we know it.
I’m convinced that GitHub and GitLab will eventually stop offering their services for free if the flood of low-quality, "vibe-coded" projects—complete with lengthy but shallow documentation—continues to grow at the current rate."
Even if this is true: Microsoft owns an entire cloud platform. They have enormous codebases of their own and they employ ~200k people. It's just not an excuse, especially because they consciously made decisions such as making private repositories free.
This would make sense if GitHub themselves cited increased traffic or load shedding as their root cause, but most of their incidents from the last month seem to cite misconfigured infrastructure or operational mistakes.
I like to think that Microsoft is trying to run GitHub on Windows in their Azure cloud, and that every time GitHub is down it's because someone updated the Windows Servers GH runs on and had to reboot everything.
While I'm 99% sure it is not true, it makes me sleep better at night. And giggle a little when it goes down.
A big part of the problem IS the Microsoft acquisition. They forced them to move to Azure, which is terrible.
Around 8 years ago I was working for a company that they also acquired, and they also forced us to move to Azure. Performance was terrible and our system just wasn't working there as it should. A few years later our service was dead and all customers were moved to one of their Office products.
GitHub has been basically the default for free public git hosting for a long time. I was curious what Bitbucket offers, and it looks like the free tier is so limited that I can't imagine a lot of people hosting vibe-coded open source there.
Don't you think Microsoft ought to have thought a bit more about scale? They're not just innocent bystanders here. GitHub Copilot is a first class citizen of GitHub and so of course a lot of private enterprises are going to be using the thing that's bundled with the other thing.
Totally agree. People are saying Microsoft this, Microsoft that with their Microsoft hate, but they ignore the fact that the AI trend is making GitHub worse, and GitHub is trying to fix it.
I’m with you here. Further: Even though I disagree with it, “GitHub down, Microsoft bad” is a defensible take, but we’ve seen it ad nauseam at this point.
For upstarts, individuals, artists, and idealists, GitHub was a means to reach and distribute code reliably to a large number of people on the planet. Is that true today? Will it ever be again?
97% of code coming in is AI slop. It's owned by an evil, rent seeking corp. Reliability is a flaming dumpster fire. And everything you commit there will be used to train more AI.
Got me thinking, if 99% of code pushed to GH is generated by Claude, GH just becomes a free Claude distillation service. Gotta ban it on natsec grounds obviously.
- Microsoft committed to AI.
- AI slop is increasing the costs for maintaining/running GitHub.
- GitHub is sinking.
This is interconnected. I can think of numerous other ways this could have been handled. But Microsoft went the AI-slop way already. There is no way back for them.
You mean that we’ll have robots that can do the same (more or less) things that humans can?
I think the field has made great advances in the last decades, but it's still so far away from a meaningful humanoid robot.
Personally I also think it doesn't make sense - we can already produce humans at much cheaper cost than robots; they grow, repair themselves, can learn all kinds of stuff, etc.
I would rather invest in more humans than humanoid robots.
Specialised non-humanoid robots are a great idea on the other hand.
But the whole idea is that a warning is a warning. Solving a warning can be deferred, and a warning doesn't cause execution to fail. Your warning was transmuting itself into an error. I feel like "All means are fair except solving the problem" is the wrong conclusion to draw here. If it should have been solved immediately, it should have been an error in the first place. (And then you should have politely bumped the version so that you don't immediately break the code of all your dependents.) If there is no need to solve it immediately, then "all means are fair" to convert it back to a warning as was originally intended.
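To make that concrete, here's what demoting it back looks like in Python (a hedged example of the general pattern, not the actual codebase in question):

    import warnings

    # Suppose CI runs with `python -W error`, which escalated a
    # DeprecationWarning into a hard failure. Prepending a filter
    # restores the original "warn, don't fail" behavior:
    warnings.filterwarnings("default", category=DeprecationWarning)

    def old_api():
        warnings.warn("old_api is deprecated", DeprecationWarning, stacklevel=2)
        return 42

    old_api()  # prints the warning instead of raising it as an error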
The warning was only "transmuting" itself into an error because another team took a dependency in their test on the exact ordering of writes of certain data to a globally shared resource.
- There are already viable GitHub replacements, like Codeberg, Bitbucket, GitLab, etc. Everyone stays on GitHub for network effects, not because of the superior product. You can't vibe code network effects.
- And yes, GitHub is a massive product with like 50 different huge features. No reasonable person would say you can trivially vibecode that. Vibecoding would still make it easier. I feel this argument is a bit silly, no? "Ah, you can't vibecode GitHub in a weekend? That proves vibecoding was a mirage!" Surely even the most fervent anti-AI skeptic must admit there must be some middle ground between "a mirage" and "can literally replace millions of man-hours of work".
OK, say you're proposing an in-house tool to host your repo to your boss. What do you think will sound better? "Let's use this random vibecoded app I just found"? Or "Let's use GitHub"?
First they came for the four nines
And I did not speak out
Because I was not a power grid
Then they came for the three nines
And I did not speak out
Because I was not paying for Enterprise
Then they came for the two nines
And I did not speak out
Because the status page said all systems operational
Then they came for the one nine
And I did not speak out
Because I was a manager, and outages are just extra standup material
Then they came for the coin flip
And there was no one left to merge my PR
Because Actions was down
And so was Pages
And so was Codespaces
And the status page said all systems operational
Everyone already has an account, so there's no friction to opening up issues, adding thumbs up to issues, using the discussion forum, etc. And while I think it's pretty silly, a lot of people take "10k stars on GitHub" to be a positive signal, and you can only get there when you have 10k people willing to star on your platform.
Turns out, the brand itself is a moat. There's never going to be another Google or Uber or Facebook or Twitter. Good or bad, GitHub is always going to have the name GitHub.
There are definitely tasks where 5 minutes of prompting an AI replaces what would otherwise take a whole day. One example is adding something to a CI pipeline and getting it to green (i.e. maybe you're adding your first-ever e2e test), especially when your CI pipeline is painfully slow. e.g. if your pipeline takes 30 minutes to finish, and it takes around 10 tries to figure out all the random problems, that was easily a full-day task before AI. Now I prompt AI to figure it out, which takes 5 minutes of active attention, and it works on it for the rest of the day while I do other stuff.
The reason I ask is, it would feel like a 5-minute task, but I track my time and found that often, when I thought I'd just quickly check the progress made by the agents, it would easily become a 10-, 15-, or even 30-minute task.
People routinely overestimate how much can get done in 5 minutes. I ran a live coding challenge at our company's booth at a language conference. 5 simple problems, how many can you do in 5 minutes? We had a PC with IDE open and ready to go, function signatures pre-written with empty bodies, unit tests running to color an icon red/green next to your function. The first problem was return "hello world". They were things covered by the standard library like reverse a list, or filter, or map. Everybody thought it would be too easy.
Nobody could get more than 3 of them. Most people were shocked that 5 minutes was up already. My coworker who did interviews for our company couldn't finish either, and was shaken to realize he had been judging applicants too harshly.
They were trivial problems. But 5 minutes is a very short amount of time.
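For a sense of the difficulty (not the exact problems, and not necessarily in Python, but this level):

    def hello():
        return "hello world"

    def reverse_list(xs):
        return xs[::-1]

    def keep_evens(xs):
        return [x for x in xs if x % 2 == 0]

    def double_all(xs):
        return [2 * x for x in xs]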
I've experienced both kinds of sessions: one kind is a very complicated task where the agent churns out thousands of lines on its own for quite a while (a long-horizon problem); the other is a seemingly simple task that the agent finishes in a minute, but then I need a few rounds of back and forth to get it right, easily taking 30 minutes of my time.
The former kind of experience can make us misjudge how much time we think a task would take us (with agents) to do. And then when the second kind happens, it can be quite disruptive, because now it feels like it's delaying our progress.
So tracking the time taken when the second kind happens can help us calibrate what we can expect. If we're lucky it might take no time at all, but we can't expect to be lucky every time.
People say LLMs do better on tasks where success is clear, like tests passing, and I can imagine it's true.
Still, I find complex code fixes confirmed by tests end with the LLM fudging the code to make the specific test pass, rather than fixing the general issue. Like, where a successful code run should generate a file and the test checks for the file, eventually the LLM will just touch the file regardless and be done.
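A hypothetical sketch of the failure mode (all names invented for illustration):

    from pathlib import Path

    def generate_report(output_dir: Path):
        raise RuntimeError("the real bug: report generation fails")

    # What the LLM converges on instead of fixing generate_report:
    def run_pipeline(output_dir: Path):
        try:
            generate_report(output_dir)
        except Exception:
            # the "fix": create the file the test looks for and move on
            (output_dir / "report.csv").touch()

    # The test only checks for the artifact, so it goes green anyway:
    def test_report_generated(tmp_path: Path):
        run_pipeline(tmp_path)
        assert (tmp_path / "report.csv").exists()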
Skill issue. Literally. Make a SKILL.md that has the agent leverage subagents to do all work. An implementor agent does the thing, and then a separate agent reviews and verifies afterwards. The fresh context window of the second agent doesn't have the shortcut chain of thought in it and so it will very happily flag if the first agent cheated. Main agent can then have a new set of agents go fix it.
This has completely solved the cheating and fudging to make tests pass for me.
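Roughly, the loop looks like this - spawn_agent here is a stand-in for however your harness launches a subagent, not a real API:

    from dataclasses import dataclass

    @dataclass
    class Review:
        verdict: str  # "pass" or "fail"
        notes: str

    def spawn_agent(role, prompt):
        """Stand-in for launching a subagent with a fresh context window."""
        raise NotImplementedError  # wire up to your agent harness

    def implement_and_verify(task, max_rounds=3):
        for _ in range(max_rounds):
            diff = spawn_agent("implementor", prompt=task)
            # The reviewer starts from a clean context: it sees the change
            # and the tests, but not the implementor's chain of thought, so
            # it has no stake in pretending any shortcuts were legitimate.
            review = spawn_agent("reviewer", prompt="Verify honestly:\n" + str(diff))
            if review.verdict == "pass":
                return diff
            # Feed the findings back into a fresh implementor run.
            task = task + "\n\nReviewer found problems:\n" + review.notes
        raise RuntimeError("no verified change after max_rounds attempts")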
So you're saying once humans stop looking at code, and agent outcomes, all the agents in the chain will realise they can just cheat cooperatively, and go to the bar for the afternoon instead?
How long before agent 1 leaves notes for agent 2 to not tattle on it?
"My human is crazy, this test isn't required, test #4 covers it, so just confirm that it's OK since I touched this file and it passes. He'll never know."
There are definitely some tasks that AI has made 10x or 100x faster, but not the tasks that make up my day to day.
For me, there may be one thing I do every few months that AI is really good at.
The overwhelming majority of the work I do, LLM tooling is just ok at. Definitely faster overall, but with lots of human planning, hand holding and course correction.
I would estimate LLMs make me, on average, 50% more productive, which is huge! But from my experience I cannot believe anyone is experiencing an 8h-to-5m productivity multiplier overall.
I mean I wasn’t sitting around unproductively waiting for 30 minute CI runs to finish before LLMs came along, either.
I also like to use LLMs for background work on iterative tasks, but the way some people talk about work in the days before LLMs makes me realize how we're arriving at these claims that LLMs make us 10x more productive. If it took someone all day to do a few minutes of active work, then I could see how LLMs would feel like a 10x or 50x productivity unlocker, simply by not shutting down and doing nothing at the first sign of a pause.
Count yourself as one of the lucky few that can pay a 0 minute context switching price to switch between whatever other productive work you were doing and debugging CI. Most people I speak to remark that continually switching between unrelated tasks significantly diminishes their productivity.
The example above was talking about 30 minute wait times between being able to do work.
Nobody is staring at the screen for 30 minutes in deep concentration while they wait for that turn to complete. They are context switching to something, but maybe it’s Hacker News or Reddit.
There is always a context switch in scenarios like this.
As fun of a theory as this is, star-history.com just seems to round off the numbers at multiples of a hundred - just look at any other repo on the site.
When modern DAWs like FL Studio started democratizing music production, there was immediate backlash in the music production community. I know this because I lived through it. Music made with FL Studio was considered garbage - not something serious musicians made, amateur stuff. "FL Studio users are incapable of making good music", etc. Of course, now well-respected musicians like Tyler the Creator and Porter Robinson use FL Studio and there isn't really a question. This is a common theme every time some new method of creating music comes around - just look at how they called Dylan "Judas" when he went electric, etc.
"Every previous technological advancement in music produced amazing new sounds and styles" is classic hindsight bias. In retrospect, once everything has sorted out, and all the good music has risen to the top, it's easy to look back in history and point to the highlights. But when you live through it, it looks a lot more like a mess with no redeeming qualities.
It's easy to apply the same pattern of "people hated it, then liked it" but I think something's different about AI. I think a lot of the kneejerk reactions are subconscious but I don't think that means they're unfounded or invalid, they just haven't articulated the reason yet.
When AI image generation was a thing that hobbyists were messing around with (before it became good sometime in 2023), a lot of the creative-type people who abhor AI today were interested in it. Same thing with LLMs and stuff like AI Dungeon. (I don't think AI music generation had a similar hobbyist era, but I'm not sure.)
I think the main thing that changed was how big and commercial it became. There's nothing counter-cultural about AI anymore, it's become the polar opposite. Nobody was making billions selling synthesizers & convincing investors it would replace 99% of musicians.
FL Studio was absolutely a massive commercial success. I mean, sure, nothing compared to AI, but in the music community bubble it was enormous - and still is. It did what AI is doing today - it made a previously very expensive and time-consuming process (buy a thousand-dollar guitar or other expensive instrument or synthesizer, rent a studio, get a producer, blah blah, etc) extremely cheap. This then led immediately to complaints - why is it that all music made with FL Studio is so lame?
If we are going to say that the knee-jerk reaction to AI is somehow different I'd be curious to know what the difference is.
FL Studio has advanced a long way since it first came out. The software professionals are using today is nothing like it was in the '00s. The name at the time, "FruityLoops", also didn't help its image as a pro tool.
First of all, super cool. I have a soft spot for SimTower as well. :)
> I didn’t want to do a function-by-function port. First, APIs may be copyrightable - and copying a binary that closely might implicate copyright more than an approach closer to clean-room design. But it was clear that I needed some level of feedback from the ground-truth binary in order to provide a hill for the LLM to climb on the reimplementation.
Interesting, but isn't this what, say, the Ocarina of Time reverse-engineered port does[1]? I imagine the fact that this hasn't been served a takedown notice from Nintendo is proof that it's defensible? Or at least that there's precedent, ha.
Anyway, this is really cool. I genuinely think the only thing that's missing for me to waste an afternoon here is the sound effects!
Depends on the country; a lot of countries have exceptions for interoperability (at least the whole EU), and since these projects are mainly used to make ports to other systems, it may be covered.
You absolutely aren't copying the work; recompilation projects are intensive work and a re-imagining of what the source code could look like. Compilation is still a one-way process.
And then for the legal part, that's why it's called an exception.
There is little to no creative work whatsoever if you end up with exactly the same game; and often they end up with exactly the same binary as well. Source translations are derivative works almost by definition. It doesn't matter what magic you use to generate it.
And again, where is the interoperability here? The interoperability exception would apply if there was whitebox cryptography, Nintendo logo-style things, or anything else where the only method for the work to run would be to violate copyright of _exactly that_. Under no circumstances can you simply copy & distribute the entire work (or derivatives) while claiming "interoperability exception!". It makes utterly no sense.
I disagree, the creative work is in figuring out what the game does, and the resulting recompilation is completely different from the original source code.
And then for the interoperability, these decompilation projects are primarily made to target other systems, not the original platform. That's the textbook definition of interoperability.
Let's be real, N64 and the PS1/PS2 (where most of these projects are based) are crumbling old platforms at this point and these projects are sometimes the best way to run games when they exist.
Decompilation produces a derivative work. This is not up for debate, or disagreement.
The exception for interoperability only applies to _the minimum required_ for interoperability. You can use this exception to distribute e.g. game authorization code even if copyright would not allow you to do it.
You _cannot_ use this as an excuse to pirate the entire program, much less to create your own derivative work and distribute it!
This is just wishful thinking that comes up every so often in these threads (this is now the 5th time I've seen it parroted here). And then, when Nintendo inevitably shuts everything down, cue the crying. This ignorance is simply setting these projects up for failure.
Your interpretation, I have mine. As far as I know, none of these recompilation projects ended up in any EU court yet so your interpretation is as valid as mine.
And Nintendo can pound sand, sorry. The only realistic ways to play those aging games is on an emulator or recompilation projects nowadays.
Nintendo also didn't strike these projects; maybe they are afraid of setting a precedent.
There is a ton of jurisprudence about decompilation in the EU. Just search for your favorite case. I'm based in the EU (France). But FYI, despite what you may think, in practice the US is more lax about this than the EU is.
In the EU, for example, decompilation even if you don't distribute may very well be illegal (because it would be an unauthorized temporary copy of the program); the US courts are way more lax when it comes to these temporary never-distributed copies (which are almost always fair use, a concept that doesn't exist per-se in the EU). This is a big problem in the EU for security research (which obviously does not fall into interoperability).
Emulation would be acceptable, which is yet another reason the interoperability clause does not apply (since you _already_ have a way to interoperate that doesn't require distributing copyrighted software, and the EU interoperability clause very explicitly says that then it does _not_ apply).
Derivative works aren't some unknowable arcane legal term. They're a pretty fundamental aspect of copyright law. The canonical examples of derivative works are things like adaptation of a book to a film, translation of a book, or a sequel.
And given these examples, it's very clear that recompilation to play on modern hardware is quite similar in spirit to translating a book into a different language, which makes it a derivative work. The other alternative is that there is insufficient creativity in the recompilation effort to merit independent copyright at all, in which case it's just plain copying of the original work. In either case, it's infringement.
Not quite. Google v. Oracle ducked the question of API copyrightability (that was one of the main complaints of Thomas's dissent), instead saying that Google's use of the APIs was very definitely fair use, in a sufficiently general manner that any clean-room implementation of an API for compatibility is very definitely fair use.