Tesla Again Paints a Misleading Story with Their Crash Data (forbes.com/sites/bradtempleton)
104 points by xnx on April 26, 2023 | hide | past | favorite | 198 comments


I keep hearing horror stories about Tesla vehicles and I'm amazed that they're allowed on public roads (in the U.S.) with what appears to be alpha-level software. I appreciate that the FSD mode is only to be used with the driver constantly monitoring and being aware, but that's an unrealistic expectation of people, not to mention a strange interpretation of "full self-driving".


Honestly, thanks to the Musk influence you can "move fast and break things" even when it comes to multi-ton machines with humans inside moving at 80 miles an hour. That Tesla was allowed to test in production with basically no legal liability is so messed up.


FWIW: Tesla is absolutely liable for any damages resulting from AP/FSD usage. They aren't protected in any way by law or regulation. You want to sue them, you go right ahead.

Normally I'd expect the fact that they aren't losing these suits regularly to factor as a prior in people's opinions about the cars' safety. But I've also been in enough of these threads to know that it's actually all about people's opinions of the CEO and not the product.


The product is ridiculous regardless of the CEO. It's at least false advertising, or fraud. Looking at any video of it in action, it's clearly also dangerous. And it's not cherry-picking: on YouTube these are all big fans trying to make it seem great, yet every few minutes it does something crazy.

I think the fact that it has supporters is because of the CEO, not the fact that it has detractors, though I'm sure some are more vehement because of the CEO.


> The product is ridiculous regardless of the CEO.

It's amazing. The most fun I've had in a vehicle in my whole life, and absolutely worth the purchase price. I'm sorry you aren't impressed, really, and I genuinely can't understand why there's so much hate about something so great.

But none of that is relevant here. The question at hand is "is it safe?", and the clear answer is "yes, quite".


>It's amazing. The most fun I've had in a vehicle

It may be fun for you, but it's not fun for all the other people on the road put at risk by Tesla owners who don't know the limitations of their vehicles.


Two weeks ago I took a ride in a friend's Tesla Model S with Autopilot 2.0 here in Sweden, with the FSD feature enabled; he tried to show me how awesome it was. Driving around for 15 minutes on the highway was nice, but once we got onto city streets it took less than 5 minutes for the car to try to steer itself into a curb during a turn, almost hitting a pedestrian waiting at the light.

No thanks, that shouldn't be allowed on public roads.


I used auto steer for about 10 minutes before it got confused on a lane split and tried to crash me into a light pole =( I hope it would have stopped before hitting it, but it really didn't look like it was going to!


That's not correct, they are no more liable than carmakers are if you are using cruise control.

Unless you can show in court that their system was defective. It is designed to not be able to do the full driving task, to need interventions. It performs as designed.


Isn't that what I said? The logic above would work the same for cruise control. If you ship a bad cruise control or other automation system, you get sued and lose. Toyota paid out more than a billion dollars a few years back due to problems[1] with their fly-by-wire accelerator design. It's not like you can't punish an automaker for safety issues. They get sued all the time, successfully.

Except, notably, the one you want to yell about the most. I'm just saying that should maybe be part of your analysis.

Look. The cars are safe. We've been having the same argument, again and again, for years and years now. People show up in these threads just taking as a given that these systems are unsafe, and refuse to adjust that prior as these cars become some of the most popular products on the market.

[1] What were perceived as -- IIRC it's since been argued that the problem in the suit didn't actually exist. And they settled anyway!


> only to be used with the driver constantly monitoring and being aware, but that's an unrealistic expectation

FSD monitors the driver for indicators of inattention, e.g. not looking at the road. First you get a visual alert, then a shrill alarm, and if you're still not paying attention the vehicle slows to a stop. Drivers with a history of inattention are locked out of FSD.

Contrast with traditional cruise control where you can fall asleep at the wheel going 100mph and it'll hold that speed until you crash. Of course no one wrings their hands about that.


People suck at context switching. Even if the driver is looking at the road, they can’t suddenly be put in the context and expect to do well.


There is a very good reason why people don't trust this absolute snake oil in the first place other than it being pushed by Tesla and Elon.

FSD (Fools Self Driving) feature is a complete scam with dangerous and deceptive advertising putting the lives of the driver and others at risk. The Tesla vehicles themselves are not.


Elon is obviously doing many stupid things, but there are many smart people at Tesla working on this issue, so it's not exactly a complete scam.

Isn’t it more realistic that FSD is one of those classic engineering problems where the last 10% takes 90% of the effort?

Also, the “move fast and break things” mentality is symptomatic of the broader tech industry. Despite being glorified for decades, it’s become a toxic mentality that’s hurt us with social media, and is now exponentially more pronounced with AI. Sure, maybe Elon should be “made an example of”, but collectively we need to sober up and realize this is a systemic issue in tech rather than scapegoating one eccentric billionaire.


"Move fast and break things" isn't entirely inappropriate in the world of non-consequential software, i.e. software not affecting lives or money. But for software affecting lives (people could die) or money (deposits and transfers could go awry) then "move fast and break things" is unethical. Obviously this bit would apply to things such as cars and airplanes, too. You really don't want us building the controls of a nuclear power plant, do you? Probably not.

I'm appalled the NTSB hasn't reined in Tesla. I'm cynical enough to start wondering which officials they've "bought off."


Would it be unethical if FSD Beta causes 1,000 fatalities but FSD 1.0 prevents 1,000,000 fatalities? Real world training data is needed to get from Beta to 1.0. Seems like you are oversimplifying the ethical decision being made by NTSB.


Two points:

1) Tesla hasn't demonstrated that future hypothetical, so it's a speculative argument at best.

2) It's still unethical if they could have achieved that more safely by following industry norms and best practices, which they consistently fail to implement. Tesla could hire safety drivers for testing before public releases, but they don't. They could implement more redundancy to mitigate single points of failure, but they don't. They could file the same regulatory paperwork others do (insufficient as that is), but they don't do that either. They don't even use industry-standard terminology and safety concepts like ODDs (operational design domains).


I'm not interested in dying for Tesla. Real people, innocent people doing nothing wrong, have been killed. It's past time to stop this madness.


I put more of the blame on a related but slightly different principle, fake it till you make it. The subject matter makes a huge difference in how bad that approach is. Falsely claiming that an app uses AI to identify if a picture is a hot dog or not is dishonest, but not the end of the world. Lying about a medical test being valid, or that a car is fully self driving, is a whole other thing. Those lies directly result in people being killed and should not be tolerated.


Agree that “fake it till you make it” is a good distinction to make. It’s also been glorified e.g. do things that don’t scale.

But technically, Tesla is “fully self driving” in the sense that it can autonomously navigate, steer, etc. That doesn’t mean it won’t crash. It’s also still in Beta.

In the long run, I think FSD will become better than the majority of human drivers. Seemingly, the only way these remaining edge cases can be solved is by collecting more training data from real world driving scenarios. So the NTSB is faced with the difficult decision of allowing this Beta version to crash and even cause fatalities in the short run, betting on the prediction it will drastically improve road safety in the long run. It’s a complicated dilemma, in my opinion.


> In the long run, I think FSD will become better than the majority of human drivers.

You fell for a very common misuse of statistics there. It may become better than the average human driver, but the sample for that average includes the 5-10% of drivers who cause almost all accidents. As with other phenomena governed by power laws, taking averages (or even medians) over that data throws away most of the information.
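The averaging point can be sketched with toy numbers (all invented for illustration): a fleet average pulled up by a small minority of bad drivers says little about the typical driver an FSD system would actually replace.

```python
# Toy model with invented numbers: 95% of drivers crash rarely,
# 5% crash 50x as often -- a crude stand-in for the claim that a
# small minority causes almost all accidents.
drivers = [0.02] * 95 + [1.0] * 5  # crashes per driver per year

mean_rate = sum(drivers) / len(drivers)
median_rate = sorted(drivers)[len(drivers) // 2]  # the "typical" driver

print(f"fleet average: {mean_rate:.3f} crashes/yr")    # pulled up by the risky 5%
print(f"typical driver: {median_rate:.3f} crashes/yr")
```

A system that beats the 0.069 fleet average here could still be more than three times worse than the 0.020 typical driver, and the typical driver is the comparison most owners care about.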


I said majority, not average, although what I really meant was anyone who is not a professional driver. But seems like you’re making a stronger argument for my point than I am for putting FSD Beta on the road to replace that 5-10% of bad drivers even sooner.


But you don’t advertise, let alone deploy a product when 90% of it is missing.

FSD likely requires GAI.


Oh, but you do! Moloch demands it.

The GAI requirement is an interesting theory but why? I find it dubious because, at least personally, driving seems to occur semi automatically in my motor cortex. When a car pulls out in front of me I’m not using my frontal cortex to slam on the brakes. It feels involuntary.


You have 86 billion neurons, with orders of magnitude more connections between them — the motor cortex does its thing without conscious thought, but your brain still continuously processes the world using all of your senses and its n years of experience/knowledge.

Going by a few anecdotal experiences in this thread: you can tell subconsciously, without even a moment's confusion, that a Burger King sign is not a stop sign, and that a floating plastic bag is not an object you have to avoid. There are millions of edge cases like these that humans handle properly.

Of course I might be wrong and it might not require GAI, but there is still no human-made construction that comes even close to a human brain.


So that's a good hunch. Where is the data showing that Tesla software causes more crashes than the baseline of human driving? Without data, why do you think something should not be allowed on the roads? Opinions without data are just that... opinions. That would be a pretty terrible way to regulate.


Wait, what? Tesla should be the one providing the data showing that it is safe for their software to be on the road, not the other way around. Vehicles have always needed to meet regulations before being allowed on the road, rather than being retroactively banned after they kill a bunch of people.


> rather than retroactive ban after they kill a bunch of people.

I grew up in the US, in the 1970s-80s, when we had [optional] seatbelts that would cut you in half and snap your spine, exploding [leaded] gas tanks, glass that would slice you to ribbons, and steering wheels that would impale drivers.

It's always been retroactive. Ralph Nader made his name on Unsafe at Any Speed[0].

Money speaks very loudly in the US, and we have always been willing to sacrifice quite a few things (as long as the non-money-makers are making the sacrifices), and take huge risks (as long as the non-money-makers are taking the risks), for money. It's just the way we are.

[0] https://en.wikipedia.org/wiki/Unsafe_at_Any_Speed


The problems with this policy are obvious, but it brings benefits too: we are allowed to develop new technology. If self driving has to be rigorously proven before it can be tested, the chicken-egg problem simply resolves itself into "the tech won't be developed here." Is that really what HN wants?


As a pedestrian and cyclist, what I want is to not become a statistic in a crash report. We can develop new things and test them under controlled conditions without having to put everyone outside the car at risk.


AI is easily our best bet against the true menace, which is distracted drivers.

No, "controlled conditions" don't do a very good job of training or proving an AI. Supervised cars on actual roads is the way it has to be.


The best bet against the true menace is to deploy public policies that reduce the true menace: the number of cars/drivers on the streets.

That's the most sound policy: provide the public with alternative means of transportation that are convenient and more environment-friendly, and leave room on the streets for the ones who actually require a car. If you live in the middle of bumfuck nowhere and can't be served by public transportation, great, you probably need a car. If you drive your car in cities for trips of 5-10 km because it's convenient, then we, as a society, should find ways to disincentivise this usage.


Speed limiters and automatic braking get most of the benefits without any need for hard AI problems. What AI promises is a way for governments not to have to take the heat for holding drivers accountable, but hundreds of thousands of people are being killed waiting for it to materialize.


https://youtu.be/fPF4fBGNK0U

Modern car crash test into older car.


Looks like the Malibu driver gets out, and cusses out the splattered remains of the Belair driver, who is removed from his car, with a teaspoon and a sponge.


> Tesla should be the one providing the data showing that it is safe for their software to be on the road

The linked article is literally about the data Tesla provides showing that their vehicles are safe, and it doesn't even refute the conclusions. It's about nitpicking at the margins.

To be clear: Tesla's data says autopilot use is about an order of magnitude safer than human driving. The author complains that this is confounded by the fact that AP is used in different conditions where the baseline accident rate is lower.

You need a lot of confounding effect to make up for an order of magnitude delta! Seems like everyone wants to skip that part and yell about the CEO instead.

The cars are safe. They are extremely common vehicles now. If they were unsafe we would know. We know they are safe. Any argument needs to contend with this, or it's basically just a conspiracy theory.


> It's about nitpicking at the margins.

The article is not nitpicking at the margins. It's showing how Tesla is making misleading claims out of their insufficient data by not accounting for operating conditions. That's literally how Tesla is claiming Autopilot is "an order of magnitude safer than human driving".

> You need a lot of confounding effect to make up for an order of magnitude delta!

Or you could just make misleading comparisons to claim that. That's pretty easy too.
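To make the confounding point concrete, here's a toy Simpson's-paradox-style calculation (all rates invented for illustration): even if Autopilot were exactly as safe as a human on every road type, using it mostly on highways would make its aggregate crash rate look several times better.

```python
# Invented illustrative rates: crashes per million miles by road type.
base_rate = {"highway": 0.1, "city": 1.0}

def aggregate_rate(mix):
    """Crash rate for miles split per `mix`, assuming per-road-type
    safety identical to the human baseline."""
    return sum(share * base_rate[road] for road, share in mix.items())

# Humans drive everywhere; Autopilot miles skew heavily toward highways.
human = aggregate_rate({"highway": 0.5, "city": 0.5})
autopilot = aggregate_rate({"highway": 0.95, "city": 0.05})

print(f"apparent safety advantage: {human / autopilot:.1f}x")  # from road mix alone
```

With a skewed enough mix the apparent advantage grows arbitrarily, which is why per-road-type (or per-ODD) breakdowns matter more than a single aggregate figure.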


It's just tiresome. You know the cars are safe. You want to say they're unsafe. But you can't, because they're safe. So you're yelling about what amounts to marketing strategy.

Which, fine. Everyone does bad marketing, there's lots to yell about. Maybe consider yelling about the deceptive marketing practices of a product you do use? There are lots to choose from.


"Our cars are safe"

"Show us unbiased data proving it"

"If it were unsafe, we would know"

Sigh.

I own a Tesla and use FSD, but thanks for assuming.


They did show the data. You want to throw the data out instead of believing it. Nitpicks aren't winning arguments, and you know that. I'm saying that if you want to argue with someone's data, you need to argue with data and not just that some different experiment would be better.

What the linked article is doing is just excuse-making, basically.


Huh, what? It's completely reasonable to question the methodology. That's not nitpicking.

I get it. You want to take the data at face value without questioning anything.


To be blunt: I don't take your words here to be a dispassionate "questioning of methodology". If that was your goal, you would at least nod to the numbers and the clear fact that even if the measurements were confounded they probably point to a safe system.

I'm not saying take it at face value at all. I'm happy to discuss methodology and to admit that the numbers on Tesla's site amount to something closer to marketing than science. But nonetheless, the cars remain safe. This methodology would have to be outrageously confounded to produce a result like this for an unsafe car, and you know it.


> The linked article is literally about

Dude, you couldn't even get the date of the article right.


Only Tesla has that data, so no one can link to it.

If it showed things were better than average it would fit their message and I’d expect them to publish it.

If it showed things were average it would not fit their narrative. If it showed things were worse it would contradict their narrative.

Seems like there is only one reason to withhold numbers or publish them in a misleading way. They prove your claims that your tech is way safer wrong in some way.


Tesla has said they intend to pick the "better than human" fight at 5 billion miles, when they believe their advantage will be decisive rather than having an outcome that depends on how you count.


Okay so Musk loudly proclaims that buying a Tesla is the best financial investment on the planet, that buying anything else is pure insanity, on the basis of the car turning into a fully autonomous taxi via a future over the air update. Bold claim, no data provided.

Musk boasts that Teslas have the highest crash safety ratings ever, despite this being contradicted by the testing bodies (because they don't distinguish between cars with 5 star ratings). Another bold claim, contradicted by the data!

Teslas collect tons of data while driving and when crashing, which Tesla processes. You believe that this data demonstrates that FSD is safer than regular driving, but that Musk is too cautious to release it?? How can you square that with Musk's behavior generally?


Even then there'll be a distribution shift effect they need to account for. Not everyone is allowed into the FSD program, and the fact that they're self-selecting to use FSD additionally biases the roads and conditions where they choose to do so. Moreover, how you count is still very relevant if all the hairy events happen when a human operator takes control -- the car screwed up, but the resulting crash gets blamed on the operator who couldn't react to the ensuing shitshow in time, and even if total crashes/fatalities are lower when counting those events you still have the self-selection problem of perhaps those FSD operators in those conditions being better at recovery from car-induced errors than the general population in arbitrary conditions would be.

Like...maybe it's fine. But even at 0 fatal crashes in 5 billion miles they won't have a strong argument that a more widely deployed FSD would have similar success without a lot more data to explain how the thing they measured directly connects to the thing they'll want to claim it represents.
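The intervention-censoring effect described above can be sketched with a toy simulation (error and catch rates invented for illustration): the crashes that get logged measure the product of the system's error rate and the supervising drivers' miss rate, not the software alone.

```python
import random

random.seed(1)  # deterministic toy run

ERROR_RATE = 1 / 10_000  # dangerous system errors per mile (invented)
CATCH_PROB = 0.99        # fraction of errors the supervising human recovers (invented)
MILES = 1_000_000

# Count dangerous errors the system makes over the simulated miles...
errors = sum(1 for _ in range(MILES) if random.random() < ERROR_RATE)
# ...and the subset the human fails to catch, which become logged crashes.
observed_crashes = sum(1 for _ in range(errors) if random.random() > CATCH_PROB)

print(f"system errors: {errors}, crashes actually logged: {observed_crashes}")
```

A wider deployment with less attentive or less skilled supervisors would raise the logged crash rate without any change to the software, which is exactly the self-selection problem above.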


Thank you for demonstrating my point. This is exactly why they want a safety margin on their safety margin.


Then Tesla should stop making claims about safety until they are willing to back them up with evidence. The most likely scenario, given the available information, is that Musk is lying and Teslas aren't as safe as the company would have us believe.


Tesla's claims from TFA are pretty cringe, but so are the calls to ban FSD on account of a horrible safety record that isn't actually horrible.


> account of a horrible safety record that isn't actually horrible.

That’s the problem. We don’t know the safety record because Tesla won’t release it.

But the anecdotal evidence we do have is very worrying. And the development model doesn’t inspire confidence.

So the calls to ban it seem very reasonable to me.


And yet “it’s safer than normal drivers” is the defense people have been using to defend AP/FSD for years.

But we don’t know that because Tesla won’t tell us.


So... until they reach that number, they'll publish misleading stats (i.e. outcomes that favor them depending on how they count)?


Many people take the view that you shouldn't introduce a radical change in how cars work until it is proven safe, rather than feeling it should be allowed on public roads until it is proven unsafe.


There’s a slight difference between alpha/beta testing software on public roads and something like… IDK, I can’t think of anything even comparable that’s happened in my lifetime. Anti-lock brakes, maybe? Which, incidentally, fail safe and don’t happily run into stationary objects on a regular basis.


It seems like it's impossible to prove that it's safe on public roads without actually using it on public roads, though? Wouldn't that be an impossible ask, by definition?


It is entirely possible to test things under controlled trials that simulate real-world conditions. To use an analogy, we don't just test new medicines by putting them on the shelves and waiting for consumers to report side effects. We have rigorous trials first to give us baseline confidence in the product.


I would argue that it's impossible to simulate real-world conditions for a task as complex as driving. There are too many edge cases and different combinations of environmental factors, many of which involve the inherent unpredictability of other drivers.

I would assume medicines have similar or higher levels of complexity involved. However, the medicine analogy is poor: we often release medicines even when they have incredibly harmful side effects. We do the testing to quantify and understand those side effects, not to prove that each medicine does exactly what it should with no side effects.


Exactly. There's no better facsimile for the sheer variety of public roads than, well, public roads. Tesla even has rigorous simulation in place for hostile situations that should never occur on public roads:

A good general overview from Two Minute Papers: https://www.youtube.com/watch?v=6hkiTejoyms

Tesla's AI Day 2022 video, timestamped around labeling and simulation: https://youtu.be/ODSJsviD_SU?t=6037


I still see no reason why should a private company be allowed to risk the lives of those who never willingly participated in their study.

There are human trials for drugs as well, but those are only the later phases, and they require explicit consent.


Isn't the usage of Autopilot "willing participation"? Or are you referring to the other people on the road?

Wouldn't that also be true of your medicine analogy? If someone takes an experimental medicine but experiences an unknown side effect and passes out while driving, isn't the risk profile equivalent?

That hypothetical certainly is far less probable than the Autopilot case, but for any experiment that takes place outside controlled conditions, there can always be unknown and unforeseen consequences that affect people who have not given explicit consent to being part of said experiment.

To argue that we cannot test things unless all potentially-involved parties have given consent is to argue for an impossibility. There is always some small amount of risk of higher-order consequences.

I would even go further and say that arguing for such is an unnecessarily conservative approach that hamstrings any attempt at improving the world. See, for example, this commentary on the current state of IRBs [1]

[1] https://astralcodexten.substack.com/p/book-review-from-overs...


There are plenty of drugs after which you can’t drive.

Also, even then the “danger to others” aspect is negligible compared to faulty software controlling a ton of steel/battery inside a system that depends heavily on each participant behaving predictably.


Safe driving for a human also depends heavily on predictable behavior of others. If that's an important metric, it seems like the optimal solution is a single self-driving system with a monopoly on the market... that way each and every driver is predictable, correct?

I guess I don't understand your argument. We put faulty humans in control of a ton of steel/battery in the same system all the time; humans have the disadvantage that when lessons are learned, they aren't transferrable without great amounts of effort. Self-driving systems can share learnings across the whole fleet with software updates, which seems like a strictly better solution in the long term.

It seems like you're making perfect the enemy of "better".


Yes, and we have to mind the denominator too. Waymo just announced 1 million self driving miles, Tesla just announced 150 million self driving miles.


Waymo self driving is not at all the same as Tesla’s. One is fully driverless and the other requires driver monitoring.


This is correct but misleading. Waymo and Tesla are pursuing strategies so different that attempting to compare them is futile.

It's true that within the small geofences in which Waymo operates, their cars are dramatically less likely to come unstuck than a Tesla running FSD beta. On the other hand FSD beta drives on pretty much any public road and not a curated set of pre-driven, pre-mapped, pre-validated streets. Yes there are many geographies where it struggles; there's a cottage industry of people obsessively showing these on YouTube. There are also some areas of the country where FSD beta is utterly rock solid and people routinely drive substantial distances with zero interventions.

It's also not entirely accurate to describe Waymo cars as "fully driverless" as they have a team of human employees remotely monitoring the fleet and can remotely operate any vehicle.


> It's also not entirely accurate to describe Waymo cars as "fully driverless" as they have a team of human employees remotely monitoring the fleet and can remotely operate any vehicle.

This is absolutely not true. Waymo remote agents cannot operate a vehicle. All they can do is answer questions from the vehicle or plot a path to go around if it gets stuck. They cannot make any safety critical interventions. Waymo has talked about this many times.

It’s fully driverless because the vehicle is responsible for attaining a minimal-risk condition all on its own. This is how other L4 companies like Cruise work too.

Zoox has a good explanation on how their teleguidance works: https://youtube.com/watch?v=NKQHuutVx78


“Plot a path to go” is controlling the vehicle remotely.


Nope, that’s misleading. Still no “control” over the vehicle because the vehicle is executing the move or it can ignore the instructions. It’s assisting the vehicle. They don’t have the ability to joystick the car.


Your point is that the Waymo operator isn't really in control because the vehicle can ignore instructions; the on-board computer won't allow the remote operator's instructions to result in a collision. If I attempt to drive my own car into a brick wall, the on-board computer will defy my instructions as best it can too. By your definition, I'm not in control of my car just like Waymo operators aren't.

Fact is, a Waymo operator can take enough control of the vehicle to be considered in control of it. It doesn't matter if control comes in the form of a steering wheel or a joystick or drawing vectors on a map. Control is control, whether it's direct, indirect, or funnelled through a governor. Waymo operators can intervene and direct the car as needed.


> If I attempt to drive my own car into a brick wall, the on-board computer will defy my instructions as best it can too. By your definition, I'm not in control of my car just like Waymo operators aren't.

Terrible analogy. You are sitting in the driver's seat with your hands on the wheel. If your car is about to hit someone, you can prevent it because you are the driver. This is what FSD is — ultimately the driver is in control. No such luxury for Waymo/Cruise, as their remote operators cannot make any safety-critical interventions like preventing crashes.

All they can do is non-safety critical interventions by way of passing indirect commands. A Waymo is in control of itself at all times and is fully responsible for its own safety. You can play with words like "control" all you want, but this is the definition of SAE Level 4 autonomy.

If anything, calling FSD "autonomous" is highly misleading because drivers are preventing mistakes and thereby hiding its real safety outcome.


> Terrible analogy.

I agree, it's a terrible analogy if I was attempting to make the argument you thought I was trying to make, per your explanation for why it's a terrible analogy.

My point is that it's invalid to say that Waymo operators don't have "control" of the vehicle because it can refuse to perform some tasks. Because my car can refuse to perform some tasks too.

The deeper point is that the difference between "in control" and "not in control" isn't binary. It's all points along a continuum. If a Waymo operator controls a vehicle remotely, it's not the same level of control as a human holding the steering wheel. But it's still control in the larger sense. If the operator wants the car to stop, it will stop. Period. If they want the car to perform some action that isn't stupid or dangerous (such as colliding with a pedestrian) it will perform that action.

> If anything, calling FSD "autonomous" is highly misleading

I agree. Which is why I didn't. In most situations FSD beta can handle the "full" task of driving. By "full", they mean it's not intended to be limited to a strict subset of tasks, e.g. only highways. It can "fully self drive" the car competently at least ~99.9% of the time, as evidenced by the fact that it can routinely drive most roads reliably. Drives shown on YouTube are mostly owners intentionally looking for roads that flummox it, since that's how they demonstrate improvements as software updates are released.

The challenges which preclude FSD from being a system capable of autonomy — the remaining 0.1% of errors — aren't failures of spatial awareness or control surfaces. They're things like path planning and road-sign interpretation, neither of which involves sensors that Tesla vehicles lack.


Yes, but Tesla does have driver monitoring, so that's the relevant safety number.

Nobody, even Tesla, is currently arguing that Tesla FSD is ready for driverless.


You just pointed out that the two driving assists are apples and oranges ... yet it was you who compared Waymo's driverless car total mileage vs Tesla's FSD.


We are talking about safety and risk, not dick measuring the tech stack. Tesla has 150x the miles, which means that for the same risk level they should see 150x the problems. Maybe divide that by 3 to account for highway vs around town. I don't see 50x the number of "Tesla killed my dog" stories as "Waymo killed my dog" stories, so Tesla probably isn't taking egregious risks. What's your heuristic?


Tesla has 0 self-driving miles, since a driver is always there and is fully responsible for any mistake that happens.


Waymo had 20 million self driving miles on public roads in 2021. Earlier this year, Waymo announced 1 million self driving miles on public roads with nobody in the driver's seat. Tesla has 0 such miles.


Thanks for clarifying! So Tesla has 7x the miles and so if their test program is riskier we should expect to see >7x the number of accidents.


No, because when Tesla's system has a problem, as my car does a few times each trip, I grab the wheel. Waymo also has cars with safety drivers, but has also run a million miles without one. Run Tesla FSD with no interventions for a million miles and it would have thousands of accidents. Thousands, where it was at fault. Waymo had 2 or 3 minor parking-lot-style, no-damage contacts in their million. Not thousands.


We see far more than 7x at fault accidents. As far as I know, Waymo has had none. Tesla has had so many that it had to post a recall notice.


On motorways mostly, where the problem is almost trivial. If you add to it that it is much easier to accumulate miles there, Waymo’s number is much more impressive.

It’s a bit like stress testing two frameworks: one tested for only a day, but on the front page of HN, while the other was up for months with barely any clicks.


Where is the data showing that heroin is not good at curing COVID?


It’s rather telling that the reply saying “show me evidence” is being voted down while the assertions mixed with disdain for Musk are being voted up.

Look, I get that Musk isn’t popular here. But please, give me some hope for humanity and at least pretend to care about facts and evidence.


Our point of view is that we should not allow beta self driving software on the roads until it is proven safe. We do not know how safe it is, nor can we know because Tesla controls that data and is not transparent with it.


I totally understand where you are coming from. However, I don’t think it is possible to prove this type of technology is safe in the real world without lots of data of it being used in the real world. Requiring proof first like you suggest creates a catch-22 situation in which self driving technology can never be released.


The sensible method is for the manufacturer to perform lots of testing and make the data available. Legislators can then allow limited real-world usage to determine if there are problems, and then expand that usage as appropriate, e.g. allow autonomous vehicles to take control on motorways (sorry, that's a UK term - is it freeways in the U.S.?).


That's exactly what happened. There was a back and forth with NHTSA for years before the beta started going public, and the rollout was carefully staged. At first, you needed a high safety score and it was aggressively geofenced. Those restrictions were gradually lifted as they collected data, and it still has attentiveness monitoring and a strike system. Tesla and NHTSA go back and forth on details all the time -- just recently NHTSA demanded some adjustments for stop signs and yellow lights.


Automakers already use proving grounds. We can set a minimum standard for FSD before turning it loose with amateur drivers.


You can’t test self driving tech the same way you can test most safety technology. It is easy to test something like an airbag: you throw dummies in the seats and run the car into a wall. There is no equivalent for self driving tech. The tech needs to be able to handle any weird situation that humans and nature throw at it. You therefore need a huge number of interactions, billions of miles worth, to be confident in its safety. You can’t just put a car through a test track to see if it is safe. The range of situations the technology needs to be able to respond to is too large to fully test like this.

It is a perfectly valid policy to want that proof ahead of time. I’m just pointing out that this requirement would effectively kill development of this technology for the foreseeable future.
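One way to quantify the "billions of miles" intuition: with zero observed events in n miles, the classical "rule of three" puts the 95% upper confidence bound on the per-mile event rate at roughly 3/n. A rough sketch, where the ~1 fatality per 100 million miles figure is only a commonly cited ballpark, not a sourced statistic:

```python
import math

def miles_needed_for_rate(target_rate_per_mile, confidence=0.95):
    """Event-free miles needed so the upper confidence bound on the
    per-mile event rate falls below target_rate_per_mile.
    Exact form of the 'rule of three': n = -ln(1 - confidence) / rate."""
    return -math.log(1 - confidence) / target_rate_per_mile

# To bound the fatality rate below ~1 per 100 million miles at 95% confidence:
n = miles_needed_for_rate(1e-8)
print(f"{n:.2e}")  # ~3.00e+08, i.e. hundreds of millions of event-free miles
```

And that only bounds the rate at the human level; demonstrating a rate meaningfully *better* than human pushes the requirement into the billions.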


Google’s robocars seem to be taking a very conservative approach to integration with the motoring public.

Don’t get me wrong, I irrationally hate the robots, used to see them all the time around Phoenix, was actively gunning for them and only had one time I could have rammed one[0] with it not being my fault. Which says a lot because it was (well, still is) a life goal to take out a robocar so I was really trying.

I used to see them doing a bunch of really stupid stuff but never anything that was outright dangerous, relatively speaking. One did stop in the fast lane when everyone was zipping by at 70+ because it couldn’t figure out how to merge into the left hand exit ramp but plenty of snow birds do that too so it’s not completely unheard of.

[0] (un)fortunately I was behind on my rent and couldn’t afford to spend who knows how long dealing with an accident investigation so the robot got to go about its day unmolested.


Google's approach is more defensible. They are careful, default to conservative behavior and stop when confused, and had professional drivers for years. And even then they're a road hazard.

At least they hold liability if anything happens, though, which is an important step.


Is Tesla using customer cars for training data? That should be very noticeable, it is a lot of data that would be getting fed back. If they're not, then the billions of miles driven by customers isn't relevant, and the same "can't just put a car through a test track" problem exists at the training level.


Yes they are, yes it is very noticeable, and yes it is a lot of very important data that could not have been collected on a test track.


I'd like to see something to back that claim. The amount of data we're talking about is significantly more than anything I've seen reported in real-world usage. As far as I can tell, customer cars run inference only, collect very little data unless something prompts them to (and even then, store it locally), and contribute almost zero to the training set.


Here's a statement from 2022. They had a labeling team of 1500 people and they laid off about 200 after the labeling infrastructure team finished an auto-labeler, so it's not just a bloated team. Do you think they are keeping 1300 people busy correcting labels on purely internal data?

https://teslamotorsclub.com/tmc/threads/tesla-lays-off-over-...


Also, since parts of it are a black box, doesn’t any update effectively invalidate all former results, requiring revalidation from scratch?


Should we allow teenagers to drive on the road before they have proven to be safe? We don’t know how safe they are.


No, we should require that they pass a driving test, pay for insurance, and register their vehicle with license plates. If I were emperor I would raise the driving age to 18 as well.

vote often, vote early


A person who has passed a driving test, paid for insurance and registered the vehicle is responsible for the car at all times, even when FSD beta is operating.

Come to Australia, the driving age here is 18. (And for your first three years as a licensed driver, you have to show plates to indicate your lack of experience because, unlike in the stupid USA, we don’t assume that passing one driving test is enough to qualify you as a “good driver”.)


No, and we don't. That's why driving instruction and driving tests exist.


Thirty thousand people die on US roads every year because of the actions of people who received driving instruction and passed a driving test.


Many (most?) accidents are due to a driver who is clearly in the wrong (drunk driving, texting, etc.), and the other parties have no way of avoiding that. How exactly would a few self-driving cars help here? They can’t prevent a collision when the other party runs a red light at some insane speed. They could start braking sooner, which is great, but that doesn’t need self driving; it is a feature of plenty of cars already.

Of course if every car were self driving we wouldn’t have crazy drivers, and accidents would diminish; but so they would if we simply didn’t have crazy drivers. Neither is feasible in the near future, and in the short term, humans are much better at handling tricky situations (like not emergency braking at the sight of a plastic bag in the air).


This is a better argument for improving training and testing than it is for lowering the bar and letting a buggy computer program drive.


It was only intended as a direct response to the post by tsimionescu. Tesla could focus on making FSD beta pass the driving test given to teenagers. It could perform the test as well as any 16-year-old, and we'd all agree that it wouldn't really prove anything about the safety of FSD or of the human teenager.

Which is my point — the road is full of hideously bad/dangerously inexperienced human drivers holding driver licenses. Given this, it's hard for me to interpret the pearl-clutching around humans co-driving with FSD beta as anything other than confected paranoia.


Do you really think there's more room to improve human nature than AI technology? I don't.

I think they both have a ceiling and that the ceiling on AI is much higher.


Yes? The largest causes of accidents are fairly well understood and could be addressed if we decided it was worth the effort. Not coincidentally, this is why all the arguments for self-driving fall flat. Being as good as an average driver is a low bar. Being as good as a median driver is a harder task. But it's the latter that is the bar we have to pass if self-driving cars are ever to become mainstream.


Deaths = N*average, not N*median.


I was not trying to claim otherwise. But unless you are intending to make self-driving cars mandatory, N*average is meaningless. The median driver is who you actually need to convince, and they are not going to choose a self-driving car that is less safe than they are.
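A toy illustration of the mean-vs-median point (all risk numbers are made up): total crashes scale with the mean of the risk distribution, but the median driver, the buyer you actually need to convince, can be far safer than that mean suggests.

```python
import statistics

# Hypothetical per-driver annual crash risks: most drivers are low-risk,
# a small minority (drunk/distracted) carry outsized risk.
risks = [0.01] * 90 + [0.50] * 10  # 90 typical drivers, 10 high-risk

mean_risk = statistics.mean(risks)      # drives the total crash count: N * mean
median_risk = statistics.median(risks)  # what the "typical" buyer compares against

print(mean_risk, median_risk)  # 0.059 0.01
```

An AV that beats the 0.059 mean would reduce total deaths if adoption were universal, yet still look like a downgrade to the 0.01 median driver choosing whether to buy one.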


And ubiquitous FSD could easily make that number worse, not better.


I'd bet that Tesla FSD could pass most DMV driving tests in the US.


That'd be really interesting to see how well it does.

I wonder why there's no modified version of driving tests for AVs that each software iteration takes.


> Look, I get that Musk isn’t popular here. But please, give me some hope for humanity and at least pretend to care about facts and evidence.

The onus is on Tesla to provide honest and accurate data, and they have shown time and time again that they are not capable of doing so.


> Tesla’s number give a very incorrect impression — so incorrect that it is baffling why they publish them when this has been pointed out many times by many writers and researchers.

It isn't baffling, it's completely in character. Bullshitting is standard operating procedure for Tesla. It's simply the nature of the company.

For example, Musk recently proclaimed "FSD soon!": https://insideevs.com/news/663396/elon-musk-tesla-fsd-potent...

And that's not new: https://jalopnik.com/elon-musk-promises-full-self-driving-ne...

In 2016 Tesla claimed that all Tesla cars being produced "have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver". That was a lie too: https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

Tesla will keep lying until the lies stop working, but there do indeed appear to be suckers born every minute.


Predicting when an algorithmic problem will be solved is a fool’s game. I don’t think Musk was intending to deceive with FSD; he’s just wildly over-optimistic and partially blinded by the financial upside.

That being said, just because they have repeatedly failed to predict when “FSD” will be solved doesn’t mean they aren’t making progress. I’ve been very happy with their latest v11 release; it drives so freaking smoothly on the highway.


You can’t be “wildly over optimistic” for 8 years in a row. That’s just called lying and deliberately misleading, especially when financial upside is involved.


I’d agree it was a grift if Tesla were sitting on their hands and not making progress.

Eight years really isn’t that long when it comes to solving hard problems. Tesla has a long attention span, and personally I think that’s a virtue which significantly increases their odds of success.


They have been collecting up to $15k/car for 8 years, selling software bound to cars that will likely never be capable of full self-driving within their useful lifespans. So yes, it's a grift.

If this were a free beta, or a $1000 option, sure. But this thing is a double-digit-percentage add-on to the price of these cars. It has been aggressively marketed and sold for years while it can't actually deliver on those promises.


> Eight years really isn’t that long when it comes to solving hard problems.

I agree. But it seems Tesla and Musk don’t. Because Musk said in 2016 that autonomy is a “solved problem” and has been saying every year that they will have FSD by the end of the year.


Yes, Musk said that in 2016.

In 2016, Uber predicted it was just a few years away.[0]

In 2012, Google predicted driverless cars for all by 2017.[1]

In 2015, BMW predicted we'd have driverless cars by 2020.[2]

In 2016, industry analysts and self driving experts predicted 2019.[3]

[0] https://www.businessinsider.com/travis-kalanick-interview-on...

[1] https://www.theregister.com/2012/09/25/google_automatic_cars...

[2] https://www.theguardian.com/technology/2015/sep/13/self-driv...

[3] https://www.businessinsider.com/report-10-million-self-drivi...


The difference is they didn’t take money from customers to deliver a promise at a future date.


Companies make claims about the future for many reasons, not all of which involve direct cash payment from customers.

BMW wanted press coverage that showed them as on the pulse of advanced technology. Perception influences brand associations and car purchase decisions.


> Companies make claims about the future for many reasons

Here is Tesla lying about the FSD capabilities of the cars they were putting on the road in 2016: https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

Show me a similar lie from another car company about their production vehicles.


It's not the same as repeatedly promising delivery of the feature — one of the hardest technological problems being worked on — year after year, while continuing to take money from customers for said feature. Not to mention gaining a ton of positive coverage for the company that enables sales and stock appreciation.


In 1966, MIT predicted that computer vision was a summer intern sized project, but it actually took an entire academic field and 50 years.

[1] https://dspace.mit.edu/handle/1721.1/6125


Umm, no.

They had a project to use summer workers on different tasks in a “reasonably coordinated” way.

And to produce a “simple system” useful for further work.

Funny what you learn if you actually read the links you post… I know, bad form to suggest people don’t read the links they post in their astroturfing sessions.


> I don’t think Musk was intending to deceive with FSD

If it were an outlier event, sure. But his entire motive seems to be to deceive; he's pathologically disingenuous. He has not earned the benefit of the doubt.


> I don’t think Musk was intending to deceive with FSD, he’s just wildly over optimistic

When he does it year after year after year it's no longer "optimism", it's just bullshit and lies.


Lots of things here are no longer true. For example, FSD now works on the highway, not just city streets. Also note this is written on Forbes’s contributor (blog) platform, which is also known for pumping crypto schemes and other nonsense points of view under the banner of “opinion” pieces from people who have no affiliation with Forbes.

I’d guess we are seeing this here today because someone decided it was a good day to FUD the company so they can make a buck shorting the stock.

Bottom line, Teslas are incredible cars and most people that have bad things to say about them seem to never have owned one.


> Lots of things here are no longer true. For example FSD does work on the highway now not just city streets.

FSD on highways has only been released recently. This report is as of Q4 2022. In fact, Tesla themselves say “FSD beta engaged in mostly non-highway miles” in the report.

> I’d guess we are seeing this here today because someone decided it was a good day to FUD the company so they can make a buck shorting the stock.

Ah, FUD and shorting. A classic and predictable response to anything negative about Tesla.


Both can be true.


The Tesla investing community on Twitter has a lot of zealots that practically worship Elon and launch vitriolic crusades against even fellow "bears" who make minor criticisms of their god-emperor, his companies, or their products.


Hang on, the main complaint in this article is that Tesla is providing some road safety statistics that to a normie look surprisingly good, but which on closer inspection rely on comparisons that are statistically misleading and involve biases that should have been controlled for.

You've not really engaged with that and have instead

- speculated that the author is spreading FUD because they're short Tesla

- decided that the report is wrong because it was written on a Forbes platform you say is for crypto scams

- elsewhere used anecdata to refute it - "I own one myself and know several people who have gotten one more recently and they all love them and have had zero issues"

With all due respect ... this is fanboi behaviour.


He makes a lot of claims he doesn’t substantiate or source in the course of his argument about his issue with the way the stats are reported.

“It’s also true that drivers are more likely to engage Autopilot in easy driving conditions — clear roads with no construction and nice weather.”

Is it? That doesn’t jibe with my personal experience and how I use it.

He also claims that if users don’t intervene, FSD will surely crash. This also strikes me as not really true; most takeovers in my experience are because the car is being overly cautious or annoying in some way that isn’t dangerous.

He claims that people who buy Teslas are old and wealthy, which skews the data since they are naturally safer drivers, but doesn’t cite any source showing that is true. Teslas cost about the average price of a new US vehicle. Another thing that was maybe true 5-10 years ago but no longer.

If he had wanted to write a neutral article about how he would like to see the self-driving industry report safety stats, that would be worthwhile, but this just seems to be ragging on what one company is doing when everyone else doesn’t report this stuff at all and there isn’t an accepted industry standard yet. Like, do we have statistics from GM’s Cruise self-driving unit, or Google’s Waymo?

My comments about the car being great were more directed at the others in this thread ragging them.


> If he had wanted to write a neutral article about how he would like to see the self-driving industry report safety stats, that would be worthwhile, but this just seems to be ragging on what one company is doing when everyone else doesn’t report this stuff at all and there isn’t an accepted industry standard yet. Like, do we have statistics from GM’s Cruise self-driving unit, or Google’s Waymo?

Ugh, completely wrong. Have you heard of CA DMV disengagement/crash reports? Waymo and Cruise provide detailed reports of incidents to regulatory authorities. In fact, Waymo goes out of its way to publish detailed safety data about their operations: https://waymo.com/safety/


I don’t see any stats on accidents per mile or anything like that, just a bunch of PR about their testing methodology. Individual accident reports filed with the DMV don’t tell you anything without knowing the denominator.


https://storage.googleapis.com/waymo-uploads/files/documents...

https://storage.googleapis.com/waymo-uploads/files/documents...

https://storage.googleapis.com/sdc-prod/v1/safety-report/Way...

CA DMV releases annual mileage reports where you can compute disengagement per mile and accidents per mile. The companies file those reports with the DMV (guess the one company that doesn't do it). Look it up, it's public. And guess who analyses those reports every year? The author you're attacking in this thread.
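For what it's worth, the arithmetic those DMV filings enable is simple. A sketch with hypothetical totals (the real figures are in the annual disengagement and mileage reports, not these placeholders):

```python
def per_mile_rates(miles, disengagements, collisions):
    """Compute disengagement and collision rates per mile from the kind of
    totals companies file with the CA DMV. All inputs below are made up."""
    return disengagements / miles, collisions / miles

diseng_rate, crash_rate = per_mile_rates(miles=1_000_000,
                                         disengagements=50,
                                         collisions=2)
print(f"1 disengagement per {1 / diseng_rate:,.0f} mi, "
      f"1 collision per {1 / crash_rate:,.0f} mi")
# -> 1 disengagement per 20,000 mi, 1 collision per 500,000 mi
```

The division is trivial; the point is that without the mileage denominator in the filing, raw incident reports alone tell you nothing.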


Criticism isn’t an “attack”.


Sure. I guess your way of "criticizing" is to accuse the author of spreading FUD and shorting the stock, while questioning his credentials by calling him a "quasi expert shilling a point of view". Meanwhile, all your points of rebuttal are provably false.


He is a consultant I doubt he is shorting the stock nor did I say he was. I do think it’s likely he is being paid to promote this viewpoint though.


The only unbiased and unpaid ones are the people who hold positive views on Tesla! Just like you. Or are you being paid to defend Tesla on HN?


I have owned a model 3, and I loved it, and I agree with almost every negative take on the FSD scam.


Same, had one for 4 years. Mostly a great car, advanced software, fast, good range, amazing charging network, etc.

But the FSD stuff is a borderline scam. EAP itself barely improved in my 4 years of ownership, and the sporadic phantom braking incidents could make it unusable depending on software version, road, conditions, day, or just at random.


Yeah, you can't use cruise control in a Tesla, because they insist on a vision-only approach and don't provide a normal option. So you can't use it, because it stabs the brakes at random moments.

To anyone about to reply that cruise control never phantom brakes for them, it will. Just keep using it.


Yup. The problem is similar to LLM hallucinations.

It does not make predictable errors (slow response for older driver / aggressive driving by young male driver / etc). It's a lot of random error no healthy human would make.

It's not even consistent on routes: you might find it a huge relief to use on your commute, and then one day it almost kills you on its 47th drive along that road.

It used to lose its mind on HOV lane mergers while driving in HOV lane like 1 out of 3 times.

It used to hard brake for a Burger King roadside sign 20 feet in the air because perceptually its vision system read this as a stop sign.

It used to slam the brakes and cut the speed 30mph because it suddenly thought it was on the local road under the highway I was driving.

It used to phantom brake for highway overpass shadows in high midday sun. They slowly fixed that, but then introduced some imperceptible conditions that still triggered phantom braking in other lighting.

It allegedly avoids objects, and would brake overly hard for someone merging in front multiple car lengths away. Meanwhile, very close cut-ins would often get missed, and I had 3 particularly hair-raising incidents over the life of the car in which I was cut off at highway speed and the car just... did nothing. I had to brake so hard the car lost all traction.


> To anyone about to reply that cruise control never phantom brakes for them, it will. Just keep using it.

I have to think these people are just fans with deliberate blinders. It doesn't take much time at all to get a phantom braking incident. You can minimize by only using TACC in light traffic with a lead car, but there will still be moments it sees something that looks too much like a car coming at you and it'll stab the brakes. In my experience, it happens probably every 100-200 miles of freeway driving. I'm getting pretty good at predicting in advance what scene in front of the car is likely to freak out the car. And I drive with my foot resting on the accelerator ready to counteract the braking as quickly as possible.

I'd really like the option to disable TACC and have a regular cruise control. We know the software supports that, because non-AP cars have it. Just make it available to all of us.


I was definitely seeing PB incidents closer to every 50-100 miles than 100-200, but yes, they are frequent enough in some road conditions that it's hard to believe people have never seen one.

Further - I think it promotes bad tradeoffs that reduce safety.

Like you, I sometimes ended up cruising with my foot over the accelerator to protect from being rear ended due to PB.

This is exactly the opposite of driver instruction and what is logically safer. If we had to choose where to rest a foot in anticipation, the brake is the better place to be.


> Bottom line, Teslas are incredible cars and most people that have bad things to say about them seem to never have owned one.

You can't argue away Tesla's stream of lies about capabilities and roadmap. And everyone I know who owns one still complains about build quality, customer service, and actual range. The only thing I constantly hear good things about is the drivetrain.


I own one myself and know several people who have gotten one more recently and they all love them and have had zero issues with any of those things.

Personally have taken my Model 3 on many long road trips and it’s great. Have the option to use our gas SUV instead but wouldn’t even consider that since it doesn’t have Autopilot/FSD

Unlikely to consider any but a Tesla when the time comes to replace our gas SUV.


Honestly, it's not really build quality and range that are my primary complaints about my Model 3. It's windshield wipers, phantom braking, and awful voice recognition. Those are the ones that annoy me most.

Also the heated steering wheel is like a binary switch, the vinyl is cheap and slippery (really noticeable on the steering wheel), road noise is loud, seats are just meh, phone-based access sometimes spotty, have to reboot once a week because music streaming hangs, and the headlights are an older matrix technology that has a mediocre pattern without even supporting the actual intended functionality of a matrix headlight.

But those are relatively minor in the grand scheme. Lots of cars have little annoyances like that. On balance I'm satisfied with the purchase, but I think it's unlikely that I'll buy another if there's an alternative. My next EV won't be a compromise, it'll have the performance -and- a nice interior with good seats.


Have you checked out any of the other EVs? I hear similar complaints about the interior quality of the Teslas (materials, reboots, road noise, hatch stops not properly aligned causing weird ear pain [1], etc.). The headlight information is new to me. I started to look at the Rivian R1S and compare it to a Model Y. It's ~40% more expensive but seems better finished; I'm not sure about the range of the R1S, and the charging network seems less than ideal if you can't charge at home overnight. They keep dropping the price of the Model Y, and it's still eligible at the moment for the $7,500 tax credit, which makes it around $44k. Teslas seem to be about a $40,000 car, so maybe worth it?

[1] https://teslamotorsclub.com/tmc/threads/ear-pain-pressure-he...


This isn't responding to you but rather tagging along on the part you quoted.

I don't understand this perspective that I must own something to have a negative opinion or sentiment about something. Are we supposed to purchase cars starting at $40,000 just so we are allowed to critique them? Does it not make sense that someone hasn't purchased such a vehicle and has a negative opinion on it because they have done their research and discovered the negatives?


You don’t have to have owned one but I don’t see how you can have a strongly held opinion on a product you haven’t at least experienced.


Let's be honest. The majority of the cheerleading, at least in the past, was coming from non-owners. So it's fair to hear criticism from non-owners too. Can't tell you how many times I had someone tell me Tesla was perfect and all my experiences were invalid, when they didn't actually own a Tesla while I've owned two of them.


Hasn’t been my experience. Most people I see speaking positively about them are owners.


I owned a model 3 and I had no complaints about build quality or actual range. Customer service was not great but similar to any other car dealer.


Teslas can be incredible cars and FSD can be a scam at the same time.


Forbes generally is bad, but Brad Templeton has been a solid commentator on AVs for a long time.


Someone should tell him to get off Forbes.


Why don't you try addressing the content?


It is a reasonable concern. That domain carries negative weight in the relative "truthiness" of reporting, since it is now open to contributors of widely varying skill and rigor. The Forbes name used to matter, but its contributor model drags the perceived accuracy of the brand down.


Brad Templeton is a credible author in the self driving space. It seems like the Forbes concern is just a convenient excuse to dismiss content that doesn't agree with the GP's view points.


I did, upthread.


> Bottom line, Teslas are incredible cars and most people that have bad things to say about them seem to never have owned one.

I get that the reality distortion field is strong with Musk, but this is especially silly. People generally try to avoid buying products that they think are bad.

Also it's not like Teslas are some exclusive product these days. Most people in tech probably know at least one person with one.

For me personally, experiencing Tesla's self driving made me even less interested in owning one.


It's written by Brad Templeton, a professional writer and consultant on self-driving vehicles since 2007. He owns 21 patents on self-driving vehicle technology.


> He owns 21 patents on self-driving vehicle technology.

Then it might make sense for him to declare how much he gets from the patent licenses or if Tesla itself licenses any of the patents, because those are possible conflicts of interest.


Plenty of quasi-experts make a living shilling a point of view. I’ve just seen so much trash published under the Forbes “contributor” byline that I automatically distrust it at this point.


If you are going to do an ad hominem attack, at least cite some specifics about this particular author. Otherwise, this adds nothing to the conversation.


The parent was an appeal to authority.

My comment wasn’t so much about this particular author as it was a comment on the general motivations of people who write on the Forbes contributor platform.

In this case the author bills himself as a consultant on self-driving cars, so given the venue of publication one has to wonder whether he was paid to write this. Writing under the Forbes contributor byline is an extremely questionable choice in any case.


I feel the same about Forbes, but I really like this author. I think it hurts his credibility to publish his content there.


This comment really belongs on some other type of forum, not this one.


> Bottom line, Teslas are incredible cars and most people that have bad things to say about them seem to never have owned one.

The author owns a Tesla and likes it a lot.


If you've spent a five-digit sum of money on something, it's easier to say "Oh, this thing does X, I like it!" and dismiss all the faults as "little things" than to say "Oh, this thing sucks, I've been duped, I'm a fool!"

Not sure why I'm saying this, I'd tell GP to check his bias, but he'll probably conclude, "Nah, I'm clever enough to be objective about this!"... we all think we're clever enough, and try to protect that idea in our heads.


> For example FSD does work on the highway now not just city streets.

Adaptive cruise control, as it should really be called, has worked in other manufacturers' cars for years now. Tesla is FUD-ing itself with false advertising, as are the people buying into it.


FWIW, adaptive cruise is part of autopilot (along with lane keeping), not FSD. FSD is intended to drive the car in all situations, way beyond lane keeping. AP is included on all Teslas now, FSD is a very expensive option.


Bottom line: Forbes is a terrible source for any credible information (Tesla-related or not).


Downvote or not, I stand firm: Forbes is a trash media asset that you can easily pay your way onto. Its continued existence is a testament to our vanity, built around lists of the top wealthiest individuals.


Wow https://en.wikipedia.org/wiki/Brad_Templeton

That's a name I had not heard for a very long time.


You make me sound like Obiwan Kenobi.


The article could be summarized as: "Tesla used airbag deployment rate as accident rate. Airbags are mostly deployed in highway accidents, so this is not a good metric. Tesla should release the raw data. FSD is bad."

So many words for nothing we don't already know about.

Also a bit ironic that the picture the author chose for highlighting FSD crashes didn't happen on a freeway.


If you read the press yesterday you would see that the large majority of articles took Tesla's report at face value, so it needed much more explaining than you imagine.


The big deal about FSD is that unlike autopilot, it isn't restricted to freeways, so the picture makes sense. Your summary is missing many important details.


It is already known that Tesla FSD (Fools Self Driving) is a scam. [0]

The robotaxi claims are as empty as finding life on the moon; the robotaxis are nowhere to be seen, yet Tesla keeps increasing the price of this snake-oil feature, marketed to unsophisticated customers as a Level 5 autonomous system.

This system is begging to be investigated by the regulators urgently.

[0] https://news.ycombinator.com/item?id=34853615


After seeing ChatGPT and others hallucinate and make mistakes, I wonder whether self-driving AI is inherently susceptible to similar behavior: hallucinating objects on the road, ignoring signs, etc. If we cannot fully understand these types of behavior in LLMs at a fundamental level, is self-driving AI any different?

Of course, it may be that the LLM architecture makes them susceptible to this but self-driving AI is not. Still, the question remains: do we understand enough about it to trust human lives to the AI?


Tesla will obviously want to make their product appear as safe as possible. Detractors will obviously want to make Tesla’s product appear as dangerous as possible. As long as they’re not outright lying, I don’t begrudge either side for obeying their incentives. The 10,000ft picture is that drivers using FSD seem roughly as safe as drivers without it.


Oh wow it’s that Brad Templeton. Cool.


> Tesla — time to accept this gauntlet. Provide real data, comparing the same thing in the same situation.

Do other manufacturers provide the granular data being requested here? This feels like someone looking for a reason to single Tesla out, for obvious reasons: Tesla headlines get clicks.


Other FSD companies don't give their beta software to drivers and say "lol, let's find out what happens."

Jeeze, is this so hard to understand?

I live in pac heights where waze and others are fully driverless now and I am less scared of them than a Tesla with a driver in it today.


Yes. And the problem is that a lot of Tesla pumpers hold two contradictory positions in the forums, along the lines of:

* It's perfectly safe, there's a human in charge

* Any crazy stuff the human prevents it from doing, the human intervened too early so we don't really know that the car was going to follow through on the crazy thing

It's too "uncanny valley" of how much rope you are supposed to give FSD.

* All crashes are the human's fault.

* The low number of crashes is purely down to FSD and has nothing to do with driver heroics preventing it from doing a dozen stupid things per drive.


Wait isn't that exactly what's happening where you live? Cruise and Waymo are putting passengers in cars without drivers in the driver seat, and there is no shortage of well-reported situations where those cars have caused issues in the city—blocking roads, crashing into buses, etc.

(Also, I don't believe Waze is operating self driving vehicles, but I am charitably understanding that you likely mean Waymo & Cruise.)


Cruise, Waymo, and now Mercedes offer some level of self-driving where they have the liability. Tesla has no legitimate claim to any kind of FSD at all until they pick up the liability while it's in control. Zero.


Except, they work.


The author lost credibility due to his bio: I worked on Google's car team in its early years and am an advisor and/or investor for car OEMs and many of the top startups in robocars, sensors, delivery robots and even some flying cars. Plus AR/VR and software.


That means he has a possible conflict of interest, but a possible conflict of interest does not, by itself, destroy credibility. Credibility is lost when you say something that's wrong, not when you have a possible conflict.


Oh no, he has actual experience in the field he is commenting on! This means he knows a lot more about this topic than most of us.


Article from 2020. There are something like 2.5x more Teslas on the road now than there were then. Needless to say the promised safety evidence never arrived. As far as anyone can tell then, and can tell now, the cars are in fact very safe.

This is all so tiresome at this point.


Article is from today, not 2020. Tesla released new statistics.


> Article from 2020.

So... an article from 2020 references the quarterly safety report for Q4/2022?

You might want to check your dates.


> Apr 26, 2023,08:00am EDT



