Isn't the usage of Autopilot "willing participation"? Or are you referring to the other people on the road?
Wouldn't that also be true of your medicine analogy? If someone takes an experimental medicine but experiences an unknown side effect and passes out while driving, isn't the risk profile equivalent?
That hypothetical certainly is far less probable than the Autopilot case, but for any experiment that takes place outside controlled conditions, there can always be unknown and unforeseen consequences that affect people who have not given explicit consent to being part of said experiment.
To argue that we cannot test things unless all potentially-involved parties have given consent is to argue for an impossibility. There is always some small amount of risk of higher-order consequences.
I would go even further and say that arguing for such is an unnecessarily conservative approach that hamstrings any attempt at improving the world. See, for example, this commentary on the current state of IRBs [1].
There are plenty of drugs after which you can’t drive.
Also, even then the "danger to others" aspect is negligible compared to faulty software controlling a ton of steel and battery in a system that depends heavily on every participant behaving predictably.
Safe driving for a human also depends heavily on predictable behavior of others. If that's an important metric, it seems like the optimal solution is a single self-driving system with a monopoly on the market... that way each and every driver is predictable, correct?
I guess I don't understand your argument. We put faulty humans in control of a ton of steel/battery in the same system all the time; humans have the disadvantage that when lessons are learned, they aren't transferable without great effort. Self-driving systems can share learnings across the whole fleet with software updates, which seems like a strictly better solution in the long term.
It seems like you're making perfect the enemy of "better".
[1] https://astralcodexten.substack.com/p/book-review-from-overs...