Cruise ‘Recalls’ Robotaxis After Crash, But The Recall Is The Wrong Mechanism

Brad Templeton, Senior Contributor. Opinions expressed by Forbes Contributors are their own. I cover robocar technology & previously worked on Google’s car team.
Sep 14, 2022, 02:13pm EDT

A Cruise robotaxi picks up a passenger at night, which is when they operate. (Photo: Cruise)

In June, General Motors’ “Cruise” unit, which is operating unmanned robotaxis at night in San Francisco and about to expand service, had an accident with minor injuries when one of its cars stopped in the middle of making a left turn and a speeding oncoming car hit it on the side. This is interesting both in terms of what went wrong in the accident and in the way the software fix was termed a “recall” at the request of the National Highway Traffic Safety Administration (NHTSA). Cruise announced this week that it would be expanding its service in San Francisco and also opening in Phoenix and Austin by the end of 2022.
It is likely this is the first crash involving no safety driver, some fault on the part of the driving system, and injuries to third parties (both passengers and those in the other vehicle). New details have recently been revealed about the event.

About the Accident

Cruise’s vehicle did make a mistake, but the other vehicle was found, according to Cruise, to be “most at fault” because it was going 40 in a 25 zone, and because it approached in the right-turn lane but, instead of turning right, switched to the through lane and went through the intersection at speed.
Some details can be found in the CA DMV report. No citations were issued, but the event may still be under investigation. Reconstructions of the event suggest the Cruise car was hoping to make a left turn: the famous “unprotected left” that many teams have found a challenge.
A Prius was approaching in a lane that requires a right turn, except for buses and taxis, and the Cruise car presumed that car was indeed going to turn right and that it could make its own turn before the Prius got there. The Prius was not a taxi, but it is unknown whether the Cruise was sure of that or based its decisions on it. It seems likely the Cruise predicted the car would slow, leaving it time to make the turn, and began its turn.
The Prius did not slow, so Cruise’s prediction engine judged that if the Cruise car continued its turn and the fast Prius also turned, the Prius would run into it. So the Cruise did what it judged the conservative right thing: a hard stop in the intersection, which would allow the Prius to make its right turn. The Prius did not do that.
In fact, it moved back into the through lane and continued into the intersection. One might presume it always intended to go straight and now may have been trying to avoid the car stopped in the intersection. It did not succeed, and hit the rear of the Cruise vehicle, causing minor injuries in both cars.
Cruise admitted their software acted incorrectly in a filing with NHTSA, while blaming the accident on the Prius for its high speed and for being in the wrong lane. The Cruise Bolt could have prevented the accident either by not stopping and completing its turn, or by not attempting the turn in the first place. Most robocars are conservative and not quite up for avoiding accidents by speeding up, as that can lead to other problems.
Cruise declined to answer most questions about this event beyond what is in their filings. The filing says the car was in “driverless autonomous mode,” and Cruise confirms this means there was no safety driver aboard. Police reports say there were 3 passengers (riding for free) in the back of the Bolt, and one went briefly to the hospital with minor injuries.
There were two people in the Prius, who were treated at the scene. As the injured passenger has not come forward to the press, it may be that Cruise offered inducements to this person not to do so; Cruise does not require NDAs from riders.
Cruise stated this problem was very rare, and happened only this once in over 120,000 unprotected left turns. (But see below to judge whether that’s good.) Three days later, they released a software update which they believe fixes the problem.
They admit that it is their duty to do more to avoid a crash, even when other road users are behaving badly, and that is a good philosophy. Most self-driving teams try to create situations with erratic other drivers in simulation, and try to find every variation of them they can. This situation is surely in Cruise’s simulator now, and also in the simulators of every other major team that read the news, if they didn’t already have it.
Aerial view of the intersection. The Cruise car was in the left-turn lane; the Prius was approaching in the red-striped bus/taxi-only lane where cars must turn right. (Image: Google/Maxar)

About the Recall

Curiously, Cruise described this particular software update as a “recall” and filed an actual recall notice with NHTSA.
Cruise and all the other teams operating are constantly doing software updates to fix problems, though of course it is very rare to be fixing a problem that caused an accident, fortunately. They are not putting those updates through the recall mechanism. The recall mechanism seems a poor fit for many reasons:

Cruise did not actually recall vehicles, i.e. ask that they be returned to a service center to get fixed. It just used its regular over-the-air software update process.

GM/Cruise never sold these vehicles; it owns and operates them, so there is no real sense in recalling them from a customer.

The recall process is involved and bureaucratic, and definitely can’t be used for every software update, even updates that fix a safety problem. Almost all software updates fix some safety problem or another, just not one that actually caused (rather than could have caused) a crash.

It does make sense that these accidents and their fixes be reported, and indeed this accident was reported to the California DMV, the police, and NHTSA long before the recall.
NHTSA has requested or accepted recalls for some other software changes, and it needs to revise and streamline this process. While NHTSA has authority to regulate the safety of cars sold in the USA, it is less clear what authority it has over cars that are not sold. It is worth noting that just the day before this accident, Cruise was granted permission (though it had not yet begun) to sell rides in its cars, and it could make sense to regulate cars in which rides are sold, though this may be more a state matter if they don’t cross state lines.

Previously, when Pony.AI had an accident in one of its unmanned vehicles, the California DMV pulled its permit to operate in that state. In Pony’s case, it was a solo car crash, clearly the fault of their vehicle.
The different circumstances may have resulted in different action, or inaction, by the DMV.

Robocar accidents are different

This crash tells us something about the different pattern of accidents in robocars. Initial reactions suggest that Cruise may be less mature than is desired.
Waymo had an accident where it was at fault in 2016, 7 years into their project, though without injuries. They have had very few since, though they have only recently done heavy driving in a territory like San Francisco. Cruise has had a bunch of embarrassing bugs of late, including an unusual police stop, groups of cars stopping due to a server communication bug, an incident with a fire truck, complaints about doing pick-up/drop-off in the middle of the street, and freezing at the start of a ride for a reporter for the Today Show.
This is more than we’ve heard about from Waymo and others, and as such we hope these are just teething pains. Every team will have snafus, and in fact every team will have crashes; what matters is the frequency at which they happen. Cruise stated on Sept 12 that they have done about 300,000 miles of robotaxi service with no safety driver in San Francisco, and they would have done fewer back in June.
That’s not great, because human drivers tend to have an injury accident about once for every million miles of driving. While Cruise may be judged as having less fault than the Prius driver, this is not as good a record as we would like. (An accident after, say, 250,000 miles does not mean they won’t do a million miles before the next one, but it’s not a great omen.)
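A quick back-of-envelope check (my numbers, not from Cruise’s filings) shows why one injury crash this early is weak evidence either way: even a fleet that truly matched the human rate would have a fair chance of one crash in its first 300,000 miles.

```python
# Back-of-envelope check (illustrative assumption, not from the
# article's data): if a fleet truly crashed at the rough human rate
# of one injury crash per million miles, how likely is at least one
# injury crash in its first 300,000 miles? Model crashes as a
# Poisson process.
import math

human_rate = 1 / 1_000_000      # injury crashes per mile (approximate)
miles = 300_000

expected = human_rate * miles               # lambda = 0.3
p_at_least_one = 1 - math.exp(-expected)    # about 26%

print(f"Expected crashes: {expected:.2f}")
print(f"P(at least one injury crash): {p_at_least_one:.0%}")
```

So a single crash at this mileage is still consistent with a human-level fleet, which is why it reads as “not a great omen” rather than proof of a problem.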
On the other hand, they’ve done 4 million miles in general (with safety drivers). Because the safety drivers intervened in any problems during those 4 million miles, we don’t know what the real accident rate was for that system, which was also an older and inferior system compared to the latest one. By comparison, safety driving is so good that Tesla reports its customers have safety-driven 30 million miles with the very poor quality Tesla FSD, and no report of any serious accident, particularly with injuries, has emerged.
The unique aspect of robots, however, was noted above: Cruise fixed this problem in 3 days, and it won’t happen again, at least not this way. The other companies have probably put this in their simulators, and it won’t happen to them either.
Companies build massive libraries of simulation scenarios (and even, in a project I helped instigate, trade them). When they do this, they have algorithms to “fuzz” the scenario, which means tweaking all the parameters in different ways. They will try it with the other car at varying speeds, or changing lanes at different times. They will try it with the Cruise car acting differently. They will try it with pedestrians doing different things at the intersection, to test that no problem occurs in thousands of variations.
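As a rough illustration of what such fuzzing can look like (the names and parameter ranges are hypothetical, not Cruise’s actual tooling), a sketch:

```python
# Minimal sketch of scenario "fuzzing": sweep the parameters of one
# recorded incident to generate many variants for regression testing.
# All names and ranges are illustrative, not any team's real code.
import random

BASE_SCENARIO = {
    "other_car_speed_mph": 40,     # the speeding Prius
    "lane_change_time_s": 1.5,     # when it leaves the turn lane
    "ego_action": "hard_stop",     # what the robotaxi did
    "pedestrians": 0,
}

def fuzz(base, n=1000, seed=0):
    """Yield n randomized variants of a base scenario."""
    rng = random.Random(seed)
    for _ in range(n):
        variant = dict(base)
        variant["other_car_speed_mph"] = round(rng.uniform(20, 60), 1)
        variant["lane_change_time_s"] = round(rng.uniform(0.0, 4.0), 2)
        variant["ego_action"] = rng.choice(
            ["hard_stop", "complete_turn", "yield_before_turn"])
        variant["pedestrians"] = rng.randint(0, 3)
        yield variant

# Each variant would be run in the simulator and checked for collisions.
for scenario in fuzz(BASE_SCENARIO, n=3):
    print(scenario)
```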
This is not like people. If a person had this sort of accident (and they do), the lessons that person learns will not teach any other driver. At best, over time, the city might improve the rules on the lanes or the speed limit, but that will take many incidents. For whatever mistakes robots make, they will generally only get better.
People get worse. While most people expect that young drivers are the wildest and get in the most accidents, it is 80-year-old drivers who actually kill themselves the most; the graph has a tragic “U” shape. This is in part because older drivers are more frail, but it has also been found that they get into more accidents as their faculties diminish.
In particular, and bizarrely, they get into more accidents where they are the car that is hit, rather than the car doing the hitting, though it is still their fault, because of situations like this one in unprotected left turns. Robots should be excellent at judging the physics of these situations, but Cruise’s system made too many assumptions. This is also a classic example of the prediction problem.
While we talk about sensors all the time in the robocar world, sensing is not the goal; it’s a means toward the real goal, which is prediction. It doesn’t matter where everything is now; what matters is where it will be in the near future. Cruise misjudged both where the Prius might go and what to do about it when the situation changed.
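To make that concrete, here is a toy version of the idea, under a naive constant-velocity assumption (real prediction systems model driver intent, not just physics):

```python
# Toy illustration of prediction as the goal of sensing: project every
# tracked object forward under a naive constant-velocity assumption.
# Real stacks model intent (turn vs. straight), not just physics.
from dataclasses import dataclass

@dataclass
class Track:
    x: float    # position, meters
    y: float
    vx: float   # velocity, meters/second
    vy: float

def predict(track: Track, horizon_s: float, dt: float = 0.5):
    """Predicted (x, y) waypoints over the horizon."""
    steps = int(horizon_s / dt)
    return [(track.x + track.vx * dt * i,
             track.y + track.vy * dt * i) for i in range(1, steps + 1)]

# A car doing 40 mph (~17.9 m/s) covers ~54 m in 3 seconds; the
# planner cares about that future position, not where the car is now.
prius = Track(x=0.0, y=0.0, vx=17.9, vy=0.0)
print(predict(prius, horizon_s=3.0))
```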
I suspect Cruise’s car is not programmed to do things like race forward to avoid an accident, or even to back up. We don’t have data on whether there was a car behind the Cruise Bolt, but one thing robots do is look in all directions at once, and if it was clear behind, the car could have acted far faster than a human, who would have to check the mirror and get to the gearshift. An electric car can accelerate very quickly, and in time robotic EVs should be very nimble at avoiding accidents, if they dare.
I say “if they dare” because most companies are being very conservative. They don’t want to make sudden moves that could make a situation worse, even with 360-degree vision. In particular, many sudden moves of this sort are technically illegal, and they don’t want to deliberately break the law, especially if it might go wrong.
For example, pedestrians can come out from behind things by surprise at any time. If they jump in front of you, it’s not your fault, but if you are on the sidewalk to avoid an accident, it’s another story. (No, the cars will never try to choose who to hit if they have to pick between two people; that’s a common myth and annoying question, not something that actually happens.)

In time, robots should get better than humans at this situation. They will get better at predicting the range of things other cars will do. The car should be constantly asking, “What will I do if this guy doesn’t turn like the rules require?” and making sure there is an action that can be taken, including speeding up in the turn or backing up.
If there is no action that could work, and the risk has a high enough probability, the car should wait, but ideally there will be a possible action. It is necessary to tolerate some risk of accident when others act in an erratic way. Defensive driving is good, but a totally defensive driver will block the roads with caution, which doesn’t solve the problem.
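One way to picture that “always have an out” check, with purely illustrative names and a stubbed-out simulator (not any team’s real code):

```python
# Sketch of the contingency check described above: before committing
# to the turn, verify that for each plausible behavior of the other
# car, at least one of our maneuvers avoids a collision.
from typing import NamedTuple

class Outcome(NamedTuple):
    collision_free: bool

def simulate(other_behavior: str, ego_maneuver: str) -> Outcome:
    # Stub: a real system would roll both trajectories forward in a
    # physics simulator. Here we hard-code the June crash geometry:
    # a hard stop fails against a car that runs straight through.
    fails = {("runs_straight_at_40", "hard_stop")}
    return Outcome((other_behavior, ego_maneuver) not in fails)

def safe_to_commit(behaviors, maneuvers) -> bool:
    """True only if every risky behavior leaves us at least one out."""
    return all(
        any(simulate(b, m).collision_free for m in maneuvers)
        for b in behaviors
    )

behaviors = ["turns_right_as_marked", "runs_straight_at_40"]
maneuvers = ["hard_stop", "accelerate_through_turn", "back_up"]
print(safe_to_commit(behaviors, maneuvers))  # True: an out exists
```

If the hard stop were the only maneuver on the list, the check would return False, and the car would wait rather than commit to the turn.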
From: forbes
URL: https://www.forbes.com/sites/bradtempleton/2022/09/14/cruise-recalls-robotaxis-after-crash-but-the-recall-is-the-wrong-mechanism/