Do any refactorings in separate reviews, and say things like "REFACTOR_ONLY:", with a rule that none of the code changes behavior.
That makes reviews a lot easier. The review starts from "nothing should be changing" and then reviewers can pattern match on that.
Otherwise, the reviewer is re-evaluating every line of code to make sure nothing has changed. That's really hard to do properly.
The version control systems I've worked with have allowed queues of changes, each one reviewed independently. As I'm developing, if I need a refactor, I go back a commit, do the refactor, send it out for review, rebase my in-progress work, and continue.
I send out a continual stream of "CLEANUP:", "REFACTOR_ONLY:", and similar changes, with the final change being far smaller than one big monster of a change.
Your reviewers will appreciate the effort.
Plays the metric game (if you're working in that type of org) without being evil too.
Yes, that is what is required. Every dependency needs an internal owner and reviewer. Every change needs to be reviewed and brought into the internal repository.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
This is related to a massive annoyance of mine: when I run a piece of software and the system is missing a required dependency, I want the software to *tell me* that dependency is missing so I can make a decision about proceeding or not. Instead it seems that far too often software authors will try and be “clever” by silently installing a bunch of dependencies, either in some directory path specific to the software, or even worse globally.
I run a distro that often causes software like this to break, because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However, I fear for the many users of typical distros (and other OSes in general, as it’s not just a Linux issue) who are subject to having all sorts of stuff foisted onto their systems with little to no opportunity to decide what is being heaped upon them.
Ruby gems and CPAN have build scripts that rebuild stuff on the user's device (and warn you if they can't find a dependency). But I believe it was one of Python's tools that started the trend of downloading binaries instead of building them. Or was it npm?
Python's pip predates npm, installs dependencies automatically, can include binaries, and the old-style packages could run arbitrary code during the install.
RubyGems is older than that, but I have no idea what capabilities it has or had.
Is it really a constant rate? Or is it a Law of Large Numbers kind of thing, where past a certain scale the randomness gets smoothed out and looks constant? Or something else?
(Obviously some developers are better or worse than others, so I presume your observation is assuming developer skill as a constant.)
You are communicating with future readers of the code. The presence of ConcurrentHashMap will lead future engineers into believing the code is threadsafe. This isn't true, and believing it is dangerous.
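The comment above is about Java's ConcurrentHashMap, but the pitfall is language-agnostic: a thread-safe container makes individual operations safe, not compound ones. A sketch of the same check-then-act race, in Python terms (names illustrative):

```python
import threading

counts = {}                 # shared state; individual dict ops won't corrupt it...
lock = threading.Lock()

def increment_racy(key):
    # ...but this read-modify-write is a compound operation: another thread
    # can interleave between the .get() and the assignment, losing updates.
    counts[key] = counts.get(key, 0) + 1

def increment_safe(key):
    # The compound operation itself must be made atomic, e.g. with a lock.
    with lock:
        counts[key] = counts.get(key, 0) + 1

def hammer(fn, threads=4, n=50_000):
    ts = [threading.Thread(target=lambda: [fn("k") for _ in range(n)])
          for _ in range(threads)]
    for t in ts: t.start()
    for t in ts: t.join()

counts.clear()
hammer(increment_racy)
racy_total = counts["k"]    # may fall short of 200_000 when updates are lost

counts.clear()
hammer(increment_safe)
safe_total = counts["k"]    # always 4 * 50_000 = 200_000
```

Seeing a thread-safe type in the code says nothing about whether the logic around it is atomic; that guarantee has to be established, and documented, separately.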
Weirdly, I remember in school when I was about 8, everyone in my class was given this making-of documentary on VHS. Not sure why; I assume someone got a big box for free.
Yes, it sets the reviewer's expectations around how much effort was spent reviewing the code before it was sent.
I regularly have tool-generated commits. I send them out with a reference to the tool, what the process is, how much it's been reviewed and what the expectation is of the reviewer.
Otherwise, they all assume "human authored" and "human sponsored". Reviewers will then send comments (instead of proposing the fix themselves). When you're wrangling several hundred changes, that becomes unworkable.
Phone companies are required to make sure 911 works on their phones. Random people on the internet aren't required to make sure 911 works on random apps, even if they look like phones.