Sure, but the costs of code duplication are well known. It increases maintenance burden, and it can cause bugs when you update one copy but forget the other.
So there can be an assumption that deduplicating removes those costs. Sometimes it does, but it can also create new ones. That removing something can make other things more difficult isn't intuitive for everyone.
Like all things in programming, there is a balance of pros and cons to each approach. Knowing when to use which is part of the profession, and everybody gets it wrong sometimes. The environment can also change and invalidate the original choice, and then you're stuck deciding whether to change the abstraction or keep it. That's a hard choice as well.
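To make that concrete, here's a made-up sketch (all names and parameters are hypothetical) of a deduplication that aged badly: two report formats started out identical, got merged into one function, and then diverged. Each divergence became a flag, so every caller now pays for the merge:

```python
# Hypothetical example: two once-identical report renderers were merged,
# then their requirements diverged, and each difference became a parameter.
def render_report(data, *, html=False, include_totals=True, legacy_dates=False):
    lines = []
    for row in data:
        # One caller needed the old slash-separated date format...
        date = row["date"].replace("-", "/") if legacy_dates else row["date"]
        lines.append(f"{date}: {row['amount']}")
    # ...another caller needed totals off...
    if include_totals:
        lines.append(f"total: {sum(r['amount'] for r in data)}")
    body = "\n".join(lines)
    # ...and a third needed HTML wrapping.
    return f"<pre>{body}</pre>" if html else body
```

Splitting this back into two (or three) simple functions now means unpicking all the flags at every call site, which is exactly the "hard choice" above.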
Nothing in coding comes for free, but sometimes it can look like it does.
What's the cost? A slightly bigger binary & codebase? It seems like it's close to free to me. Am I missing a cost? Or are these costs bigger than I'm assigning them?
The cost is exactly what is pointed out in the original tweet:
> a requirement to keep two separate things aligned through future changes is an “invisible constraint” that is quite likely to cause problems eventually
Code changes, and if those two identical or similar pieces of code are likely to change together, then whenever you change one you carry the cognitive load of also changing the other, or you risk having them go out of sync.
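A minimal sketch of that failure mode (everything here is hypothetical, just to illustrate the drift): the same validation rule duplicated in two modules, where one copy was updated and the other forgotten:

```python
# Hypothetical "invisible constraint": two copies of the same rule
# that were supposed to stay in sync.

def validate_signup_email(email: str) -> bool:
    # Copy #1, in the signup flow. The length limit was tightened to 254.
    return "@" in email and len(email) <= 254

def validate_invoice_email(email: str) -> bool:
    # Copy #2, in the billing flow. Nobody remembered to update this one,
    # so the two copies have silently drifted apart.
    return "@" in email and len(email) <= 320

# The inconsistency only shows up for addresses between 255 and 320 chars:
addr = "a" * 250 + "@example.com"  # 262 characters
print(validate_signup_email(addr), validate_invoice_email(addr))  # False True
```

Nothing fails loudly here; the two code paths just disagree, which is why this kind of bug tends to surface much later.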
Of course, when the two pieces of similar code aren't likely to change together, they should be kept separate.
For sure. Most of the time when I copy-paste code from one place to another, a change in one place doesn't imply a change in the other. But I've certainly seen it happen.