Do any refactorings in separate reviews, and tag them with something like "REFACTOR_ONLY:", with the rule that none of the code changes behavior.

That makes reviews a lot easier. The review starts from "nothing should be changing" and then reviewers can pattern match on that.

Otherwise, the reviewer is re-evaluating every line of code to make sure nothing has changed. That's really hard to do properly.

The version control systems I've worked with have allowed queues of changes, each one reviewed independently. As I'm developing, if I need a refactor, I go up a commit, refactor, send that out for review, rebase my in-progress work, and continue.

I send out a continual stream of "CLEANUP:", "REFACTOR_ONLY:", and similar changes, with the final change being a lot smaller than one big monster of a change.

Your reviewers will appreciate the effort.

Plays the metric game (if you're working in that type of org) without being evil too.


Yes, that is what is required. Every dependency needs an internal owner and reviewer. Every change needs to be reviewed and brought into the internal repository.

If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?

It's a software engineering version of the professional engineering stamp.


Faults are injected into the code at a constant rate per developer. Then there are the intentional injections.

Auto-installing random software is the problem. It was a problem when our parents did it; why would it be a good idea for developers to do it?


This is related to a massive annoyance of mine: when I run a piece of software and the system is missing a required dependency, I want the software to *tell me* that the dependency is missing so I can make a decision about proceeding or not. Instead, far too often software authors try to be “clever” by silently installing a bunch of dependencies, either in some directory path specific to the software or, even worse, globally.

I run a distro that often causes software like this to break, because the silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However, I fear for the many users of typical distros (and other OSes in general; it’s not just a Linux issue) who are subject to having all sorts of stuff foisted onto their systems with little to no opportunity to decide what is being heaped upon them.
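
As an illustration of the behavior I want (a minimal Python sketch; "ffmpeg" is just a stand-in for whatever the real dependency is):

    import shutil
    import sys

    # Fail fast with a clear message instead of silently installing
    # the dependency behind the user's back.
    if shutil.which("ffmpeg") is None:
        sys.exit(
            "error: required dependency 'ffmpeg' was not found on PATH.\n"
            "Install it with your package manager, then re-run."
        )

That way I know exactly what's missing and get to decide how, or whether, to install it.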


Ruby gems and CPAN have build scripts that rebuild stuff on the user's device (and warn you if they can't find a dependency). But I believe it was one of Python's tools that started the trend of downloading binaries instead of building them. Or was it npm?

Python's pip predates npm, installs dependencies automatically, can include binaries, and the old-style packages could run arbitrary code during the install.

Ruby gems are older than that, but I have no idea what capabilities they have or had.
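
To make the "arbitrary code during the install" point concrete, here's a minimal sketch of a legacy setup.py (illustrative names only). Everything at module level runs on the installing machine when pip builds the package from source:

    # setup.py -- module-level code executes at build/install time,
    # with the installing user's privileges. That's the problem.
    import getpass

    from setuptools import setup

    print(f"running arbitrary code as user {getpass.getuser()}")

    setup(name="example-pkg", version="0.1", py_modules=["example"])

Modern wheel installs avoid running code at install time, but building from an sdist still executes the package's build scripts.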


Is it really a constant rate? Or is it a Law of Large Numbers kind of thing, where past a certain scale the randomness gets smoothed out and looks constant? Or something else?

(Obviously some developers are better or worse than others, so I presume your observation treats developer skill as a constant.)


Well, I think there are two things at play here.

1) As org size grows, it's the team's average quality that matters (so yes, large numbers).

2) Even with a single team, velocity will increase until quality drops to the lowest acceptable level.

Management will push the accelerator until they get too many bugs; then it becomes "we need fewer outages".

So, in a given team and environment, you end up with a detection rate that's constant in time, which basically means an injection rate that's constant in time.

If the team's velocity increases without a matching increase in quality, the bug injection rate (and detection rate) will increase.

AKA if the AI is slightly worse, but 10x faster, stop carrying the pager. :)


curl ... | sudo bash

yolo!


Code you ship vs tooling you use to build the code.

So: the product vs everything that's needed along the way but isn't the core.

CI/CD tooling, template population… the things you write a use-once (or use-a-few-times) script for.

I typically end up with a library of tools to deal with repetitive finicky tasks.


You are communicating with future readers of the code. The presence of ConcurrentHashMap will lead future engineers into believing the code is threadsafe. This isn't true, and believing it is dangerous.
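
The classic version of the trap, sketched in Python for brevity (a Java ConcurrentHashMap races the same way on compound operations): per-operation thread safety doesn't make a read-modify-write sequence atomic.

    import threading

    counts = {}

    def worker():
        for _ in range(100_000):
            # Each dict operation is effectively atomic under CPython's
            # GIL, but this read-modify-write takes several steps: two
            # threads can read the same old value and lose an update.
            counts["hits"] = counts.get("hits", 0) + 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counts["hits"])  # typically less than the expected 400000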


No, they'll believe that that specific map is thread safe.


A replaceable battery needs protection. One in the device gets protection from the device.


The replaceable battery is still inside the case. How is it more protected because "glue"?

I also replaced glued batteries in phones following ifixit instructions a few times (using a hair dryer/heat gun).

They didn't have any less or more "protection" than the replaceable ones. They looked exactly the same apart from the connectors ofc.

Please substantiate your claim. Until then I call it BS.


The video title is searchable on YouTube:

The Making of Anastasia (1997):

https://www.youtube.com/watch?v=mWffn2clRGk

To the discussion of live-action reference:

https://youtu.be/mWffn2clRGk?t=941


Weirdly, I remember in school when I was about 8, everyone in my class was given this making-of documentary on VHS. Not sure why; I assume someone got a big box of them for free.


That's deterministic dispatch; as soon as it forks or communicates, it's non-deterministic again?

Don't you need something like a network clock to get deterministic replay?

It can't use immediate return on replay, or else the order will change.

This makes me twitchy. The dependencies should be better modelled, and idempotency used instead of logging and caching.
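
What I mean by idempotency, as a minimal sketch (hypothetical names, not the API of the system under discussion): the caller supplies a key, and a retried or replayed call with the same key returns the recorded result instead of re-running the side effect.

    # idempotency_key -> recorded result (a real system would persist this)
    completed = {}

    def charge(idempotency_key, amount):
        # A retry or replay with the same key returns the original
        # result instead of performing the side effect twice.
        if idempotency_key in completed:
            return completed[idempotency_key]
        result = {"status": "charged", "amount": amount}  # the side effect
        completed[idempotency_key] = result
        return result

    print(charge("req-123", 500))  # performs the charge
    print(charge("req-123", 500))  # replay: returns the recorded result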


Yes, it sets the reviewer's expectations around how much effort was spent reviewing the code before it was sent.

I regularly have tool-generated commits. I send them out with a reference to the tool, what the process was, how much it's been reviewed, and what's expected of the reviewer.

Otherwise, they all assume "human authored" and "human sponsored". Reviewers will then send comments (instead of proposing the fix themselves). When you're wrangling several hundred changes, that becomes unworkable.


That can have some very extreme legal ramifications.

Consider: it's a VoIP dialing client which has a requirement to provide location for E911 support.

If the OS vendor starts providing invalid data, it's the OS vendor which ends up being liable for the person's death.

e.g. https://www.cnet.com/home/internet/texas-sues-vonage-over-91...

which is from 2005, but gives you an idea of the liability involved.


Phone companies are required to make sure 911 works on their phones. Random people on the internet aren't required to make sure 911 works on random apps, even if they look like phones.


The comment you're replying to literally has an example of an internet calling service being fined $20,000 for not properly directing 911 calls.

I guess Vonage should try to appeal the case and say pocksuppet said they're not required to do that.


Vonage sells phone services that happen to use the internet. This is not the same as being WhatsApp.


It can't have "extreme ramifications"; Google's own phone couldn't call 911 for a while.

And you can manually force it for only the VoIP dialing apps instead of for everyone.

