I saw the Tesla Robotaxi:
- Drive into oncoming traffic, getting honked at in the process.
- Signal a turn at a stop sign and then go straight, turn signal still on.
- Park in a fire lane to drop off the passenger.
And that was in a single 22-minute ride. Not great performance at all.
Imagine you're the guy who invented SawStop, the table saw that can detect fingers touching the blade and immediately bury the blade in an aluminum block to avoid cutting off someone's finger. Your system took a lot of R&D: it requires a custom table saw with specialized internal parts, so it's much more expensive than a normal table saw, but it works, and it works well. You've got it to the point that someone can run a hand full-speed into the blade and most likely not get even the smallest cut. Every time the device activates, it's a finger saved. Yeah, it's a bit expensive to own. And, because of the safety mechanism, every activation means buying a few new parts, which aren't cheap. But an activation means you avoided having a finger cut off, so good deal! You start selling these saws, and while they're not replacing every table saw sold, they're slowly becoming something people consider when buying.
Meanwhile, some dude out of Silicon Valley hears about this, and hacks up a system that just uses a $30 webcam, an AI model that detects fingers (trained exclusively on pudgy white fingers of Silicon Valley executives) and a pinball flipper attached to a rubber brake that slows the blade to a stop within a second when the AI model sees a finger in danger.
This new device, the "Finger Saver," doesn't work very well at all. In demos with a hotdog, sometimes the hotdog is sawed in half. Sometimes the saw blade goes flying out of the machine into the audience. After a while, the company has the demo down so that when they do it in extremely controlled conditions, it does stop the hotdog from being sawed in half, but it does take a good few chunks out of it before the blade fully stops. It doesn't work at all with black fingers, but the Finger Saver company will sell you some cream-coloured paint that you can paint your finger with before using it if your finger isn't the right shade.
Now, imagine if the media just referred to these two devices interchangeably as “finger saving devices”. Imagine if the Finger Saver company heavily promoted their things and got them installed in workshops in high schools, telling the shop teachers that students are now 100% safe from injuries while using the table saw, so they can just throw out all safety equipment. When, inevitably, someone gets a serious wound while using a “Finger Saver” the media goes on a rant about whether you can really trust “finger saving devices” at all.
Anyhow, this is a rant about Waymo vs. Tesla.
some dude out of Silicon Valley hears about this, and hacks up a system that just uses a $30 webcam, an AI model that detects fingers (trained exclusively on pudgy white fingers of Silicon Valley executives)
Hotdog / not hotdog
Waymo is also a Silicon Valley AI project to put transit workers out of work. It's another project to get AI money and destroy labor rights. The fact that it kind of works isn't exactly helping my opinion of it. Transit is incredibly underfunded and misregulated in California/the USA, and robotaxis are a criminal misinvestment in resources.
a Silicon Valley AI project to put transit workers out of work
Silicon Valley doesn't have objectives like "putting transit workers out of work". They only care about growth and profit.
In this case, the potential for growth is replacing every driver, not merely targeting transit workers. If they can do that, it would mean millions fewer cars on the road, and millions fewer cars being produced. Great for the environment, but yeah, some people might lose their jobs. But, other new jobs might be created.
The original car boom also destroyed all kinds of jobs: farriers, stable hands, grooms, riding instructors, equine veterinarians, horse trainers, etc. But should we have held back technology so those jobs would all still be around today? We'd still have streets absolutely covered in horse poop, and horses regularly dying in the street, along with all the resulting disease. Would that be a better world? I don't think so.
It’s another project to get AI money and destroy labor rights.
Waymo obviously uses a form of AI, but they’ve been around a lot longer than the current AI / LLM boom. It’s 16 years old as a Google project, 21 years old if you consider the original Stanford team. As for destroying labour rights, sure, every capitalist company wants weaker labour rights. But, that includes the car companies making normal human-driven cars, it includes the companies manufacturing city buses and trains. There’s nothing special about Waymo / Google in that regard.
Sure, strengthening labour rights would be a good idea, but I don’t think it really has anything to do with Waymo. But, sure, we should organize and unionize Google if that’s at all possible.
Transit is incredibly underfunded and misregulated in California/the USA
Sure. That has nothing to do with Waymo though.
robotaxis are a criminal misinvestment in resources.
Misinvestment by whom? Google? What should Google be investing in instead?
I mean, Waymo is way better at their job than Tesla and more responsible, but this rant makes them out to be perfectly safe. Whilst they are miles safer than Tesla, they still struggle with edge cases and aren't perfect.
Human drivers struggle with edge cases also. I've seen a lot of you drive, and as an old medic who has done his share of MV accidents, I can tell you y'all ain't that good at it.
While I have no dog in this hunt, all any self-driving vehicle needs to be is just a bit better than a human one to be an improvement and a net win (never let perfect be the enemy of good enough). And historically, as soon as any new technology becomes affordable, humans adopt it and use the snot out of it. The problem is, humans aren't very good at projecting the future harm that any new tech tends to drag along with it.
All other things being equal, it would save a lot of lives to replace every human driver with a Waymo car right now. They’re already significantly better than the average driver.
But, there are a few caveats. One is that so far they've only ever driven under relatively easy conditions. They don't do any highway driving, and they've never driven in snow. Another one is that because they all share one "mind", we don't know if there are failure modes that would affect every car. Every human driver is different, but humans all share the same basic common sense. If a human sees a 100 km/h or 60 mph speed limit on a narrow, twisty, suburban street with poor visibility, most of them are probably going to assume it was a mistake and won't actually try to drive 100 km/h. We don't know if a robo-vehicle will do that. AFAIK they haven't found any way to emulate "common sense". They might also freak out during an eclipse because they've never been trained for that kind of lighting. Or they might try to drive at normal speeds when visibility is obscured by forest fire smoke.
There’s also the side effects of replacing millions of drivers with robo-cars. What will it do to people who drive for a living? Should Google/Waymo be paying most of the cost of retraining them? Paying their bills until they can find a new job? What will it do to cities? Will it mean that we no longer need parking lots because cars come and drop people off and then head off to take care of someone else? Or will it mean empty cars roaming the city causing gridlock and making it hell for pedestrians and bikers? Will people now want to live in the city because they don’t need to pay for parking and can get a car easily whenever they need one? Or will people now want to live even farther out into the suburbs / rural areas because they don’t need to drive and can work in the car on the way into the city?
Personally, I’m hopeful. I think they could make cities better. But, who knows. We should move slowly until we figure things out.
AFAIK they’re as safe as SawStop table saws. There has only ever been one collision involving a Waymo car that resulted in a serious injury. It was when a driver in another car, who was fleeing from police, sideswiped two cars, went onto the sidewalk and hit 2 pedestrians. One of the cars that was hit was a Waymo car, and the passenger was injured. Obviously, this wasn’t the fault of Waymo, but it was included in their list of 25 crashes with injuries, and was the only one involving a serious injury.
Of the rest, 17 involved the Waymo car being rear-ended. 3 involved another car running a red light and hitting the Waymo car. 2 were sideswipes caused by the other driver. 2 were vehicles turning left across the path of the Waymo car, one a bike, one a car. One was a Waymo car turning left and being hit on the passenger side. It’s possible that a few of these cases involving a collision between a vehicle turning and a vehicle going straight could be at least partially blamed on the Waymo car. But, based on the descriptions of the crashes it certainly wasn’t making an obvious error.
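For anyone who wants to check the arithmetic, here's a quick tally of those categories (the names are just my shorthand, not Waymo's labels):

```python
# Quick tally of the injury-crash categories listed above.
# Category names are my own shorthand, not Waymo's official labels.
injury_crashes = {
    "waymo_rear_ended": 17,
    "other_driver_ran_red_light": 3,
    "sideswiped_by_other_driver": 2,
    "other_vehicle_turned_left_across_waymo_path": 2,  # one bike, one car
    "waymo_turning_left_hit_on_passenger_side": 1,
}

total = sum(injury_crashes.values())
print(total)  # 25 -- the same size as the full list of crashes with injuries,
              # so the police-chase sideswipe is presumably among the sideswipes counted here.
```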
IMO it would be hard to argue that the cars aren’t already significantly safer than the average driver. There are still plenty of bugs to be ironed out, but for the most part they don’t seem to be safety-related bugs.
If the math were simple and every Waymo car on the road meant one human driver off the road with no other consequences or costs, it would be a no-brainer to start replacing human drivers with Waymo’s tech. But, of course, nothing is ever that simple.
Source: https://www.understandingai.org/p/human-drivers-are-to-blame-for-most
I was a Waymo stan before Tesla made it cool!
https://fuelarc.com/news-and-features/insurer-study-waymo-is-12-5-times-safer-than-human-drivers/
Seriously, though. I am an avowed enemy of the grim reaper; I'm a fan of Volvos for the same reason. And I like how transparent Waymo is with their data. The independent study linked is really illuminating if you like automotive safety stats.
That was great. The first comparison that came to mind after reading it was that they are both a game of Russian roulette…
Waymo - you get one chamber loaded with a blank, might kill you if you get it.
Tesla - you get one empty chamber… And the gun is loaded by your worst enemy
Excellent work
Really good analogy. loved this
This put a smile on my face.
Waymo is so much better, yeah. No problems with Waymo. Except all the times they almost hit me.
Waymo times than Teslas?
Ba dum tish
Awesome read, thanks!
Wow…
Remember guys, Tesla wants to have a living person sitting behind the wheel for “safety.” Don’t YOU want to get paid minimum wage to sit in a car all day, paying attention but doing nothing unless it’s about to crash, at which point you’ll be made the scapegoat for not preventing the crash?
Welcome to the future, you’re gonna hate it here.
I mean, compared to getting minimum wage flipping burgers in a hot kitchen, or picking vegetables in the sun, or working the register in a store in a bad neighborhood, or even restocking stuff at Walmart… yes, I would sit all day in an air conditioned car doing nothing but “paying attention”.
The unfortunate thing about people is we acclimatise quickly to the demands of our situation. If everything seems OK, the car seems to be driving itself, we start to pay less attention. Fighting that impulse is extremely hard.
A good example is ADHD. I have severe ADHD so I take meds to manage it. If I am driving an automatic car on cruise control I find it very difficult to maintain long term high intensity concentration. The solution for me is to drive a manual. The constant involvement of maintaining speed, revs, gear ratio, and so on mean I can pay attention much easier. Add to that thinking about hypermiling and defensive driving and I have become a very safe driver, putting about 25-30 thousand kms on my car each year for over a decade without so much as a fender bender. In an automatic I was always tense, forcing focus on the road, and honestly it hurt my neck and shoulders because of the tension. In my zippy little manual I have no trouble driving at all.
So imagine that but up to an even higher level. Someone is supervising a car which handles most situations well enough to make you feel like a passenger. They will switch off and stop paying attention eventually. At that point it is on them, not the car itself being unfit. I want self driving to be a reality but right now it is not. We can do all sorts of driver assist stuff but not full self driving.
A good example is ADHD. I have severe ADHD so I take meds to manage it. If I am driving an automatic car on cruise control I find it very difficult to maintain long term high intensity concentration. The solution for me is to drive a manual. The constant involvement of maintaining speed, revs, gear ratio, and so on mean I can pay attention much easier. Add to that thinking about hypermiling and defensive driving and I have become a very safe driver, putting about 25-30 thousand kms on my car each year for over a decade without so much as a fender bender. In an automatic I was always tense, forcing focus on the road, and honestly it hurt my neck and shoulders because of the tension. In my zippy little manual I have no trouble driving at all.
Are you me? I love weaving through traffic as fast as I can… in a video game (like Motor Town behind the wheel). In real life I drive very safe and it is boring af for my ADHD so I do things like try to hit the apex of turns just perfect as if I was driving at the limit but I am in reality driving at a normal speed.
Part of living with severe ADHD is you don’t get breaks from having to play these games to survive everyday life, as you say it is a stressful reality in part because of this. You brought up a great point too that both of us know, when our focus is on something and activated we can perform at a high level, but accidents don’t wait for our focus, they just happen, and this is why we are always beating ourselves up.
We can look at self driving car tech and intuit a lot about the current follies of it because we know what focus is better than anyone else, especially successful tech company execs.
I’m glad other people understand the struggles required for daily life in this respect
You seem to have missed the point. Whether or not you think that would be an easy job, the whole reason you’d be there is to be the one that takes all the blame when the autopilot kills someone. It will be your name, your face, every single record of your past mistakes getting blasted on the news and in court because Elon’s shitty vanity project finally killed a real child instead of a test dummy. You’ll be the one having to explain to a grieving family just how hard it is to actually pay complete attention every moment of every day, when all you’ve had to do before is just sit there.
How about you pay attention and PREVENT the autopilot from killing someone? Like it’s your job to do?
This is sarcasm, right?
Expecting people to be able to behave like machines is generally the attitude that leads to crash investigations.
Behave like machines? Wtf are you on about? It’s paying attention and preventing accidents. Like a train conductor does. Or a lifeguard. Or a security guard. I get the tesla hate, but this is ridiculous.
Lifeguards have very short periods of diligence before they take mandatory breaks in an extremely controlled environment. Train conductors operate on grade separated infrastructure. Security Guards do not have to take split second action or die.
Putting a warm body in a mind-numbing situation and requiring split second response to a life or death situation at a random time is a recipe for failure.
Well, put the drivers on a similar mandatory break schedule. Done.
So it emulates a standard BMW driver. Well done.
Still work to be done, it uses the blinkers.
At least it used them incorrectly, so it's just as unpredictable.
Tesla drivers are the new BMW drivers. And since Tesla uses their customers' driving data to train their AI, it's not a surprise that the AI drives like an asshole.
Oh, stop your complaining. It’s not perfect, but we’ve all seen how easy this is to fix. Just barge into Tesla tomorrow and randomly fire 20% of the employees. That’s how real leaders get things done.
/s
Haaa, finally!! An AI taxi that behaves like a normal taxi driver. It must feel so refreshing.
Doesn’t sound too safe
So, just the right amount of safe then?
And this is why DOGE gutted the Office for Vehicle Automation Safety at the NHTSA.
I thought that was to economize for expenses?!
So naturally they started with 5 employees in the smallest office of one of the smallest divisions of the NHTSA. Nooooo ulterior motive, nosiree
This would get a normal person's car impounded and driver's license revoked. Why can a company get away with it?
Systemic corruption.
Regulatory capture.
Decapitation.
I wouldn't say corruption; I think it's more that the law around the road was designed with a driver in mind, not with a company or even a robot. The penalties were designed to hurt the person at fault, because at the time only a person could drive.
It's very convenient that corporations can both be people and not be people, depending on whatever outcome is best for them.
Elon has enough fuck-you money to pay off anyone who would’ve complained.
He also paid his way into a government position to shut down the government offices that opposed him.
They had so many cameras on this car; how many laws do you think the average driver breaks every 22 minutes?
It would be interesting if they could figure out why the car chose to do these specific things.
The rent seeking is so hard with this automate-the-profits bullshit.
The moment we perfect auto-taxis, the service should be a public benefit and run by a nonprofit.
NYC mayoral candidate Mamdani is talking about making buses free, and that makes a radical shitload of sense.
Free autotaxis would be a boon for productivity and personal freedom, like AI promises to be but democratized for everybody rather than just the richest fraction of a percent.
People are going to take a shit in them. And ride them around for fun
Guess what? People already do that.
ride them around for fun
Imagine the horror!
Thanks for pointing out how insane and disconnected the Elon glazers are in believing their Teslas will drive off while they sleep to earn any kind of positive cash flow, then show up back home just in time to recharge for the commute to work, smelling fresh as a daisy.
I don't see a problem with the second one. The bus is already doing the route; it costs basically nothing to have a few joyriders.
People are going to take a shit in them.
Sure, somebody will. But the system will take note of that person, and then they don’t get to ride again. Or they have to pay a fine. Or whatever.
Parking in a fire lane to drop off a passenger just makes it seem more human.
Yea, this one isn’t an issue. If you are dropping off passengers, you are allowed to stop in a fire lane because that is not parking.
Which brings up an interesting question, when is a driverless car ‘parked’ vs. ‘stopped’?
When the engine is off?
Of course, how do you tell that with an electric car?
Yeah, tell that to police who bust people with DUIs when the engine is still off.
When the motor drivers are energized?
They turned the empathy dial to 5%. Works great, right?
Navigation issue / hesitation
The video really understates the level of fuck up that the car did there…
And the guy sitting there just casually being OK with the car ignoring the forced left, going straight into oncoming lanes, and flipping the steering wheel all over the place because it has no idea what the hell just happened… I would not be just chilling there…
Of course, I wouldn't have gotten in this car in the first place, and I know they cherry-picked some hardcore Tesla fans to be allowed to ride at all…
I've come to the realization, at least where I live, that a hell of a lot of accidents are prevented because of drivers who are actually aware and safe. This goes a bit beyond defensive driving, IMO. I'm talking flat-out accident avoidance. There is an entire class of drivers who are not even aware of the accidents they have almost caused, because someone else managed to avoid their stupid driving.
The majority of accidents that are likely to happen with these robocoffins will be single-car, or robocoffin meets robocoffin. The numbers on safety after a year will look acceptable, because error-prone driving that doesn't actually cause an accident is never reported in any official capacity.
I still maintain that the only safe way to have autonomous vehicles on the road is if they do not share the road with human drivers and have an open standard for communicating with other autonomous cars.
open standard
Sorry, no, that's infrastructure.
But Musk told me it’s ready for primetime, why would he lie?
It is probably being remotely driven from India and they just lost wifi for a minute.
To quote AVCH, “His controller disconnected.”
Hehe got it in one.
Some people will find him unbearable or a bit repetitive, but he really enjoys himself.
Favorite phrases of his seem to be: "Apocalyptic Dingleberry." "His name is John Sena." "Woa Woa Woa." "Play stupid games, win stupid prizes." "NPC move." "Need to know when to pull out." "You're not in the UK now."
AI = Always Indian.
AI = Actually Indians
You can tell it’s a Tesla because of the way it is.
I am entirely opposed to driving algorithms. Autopilot on planes works very well because it is used in open sky and does not have to make major decisions about moving in close proximity to other planes and obstacles. It's almost entirely mathematical, and even then, in specific circumstances it is designed to disengage and put control back in the hands of a human.
Cars do not have this luxury and operate entirely in close proximity to other vehicles and obstacles. Very little of the act of driving a car is math. It’s almost entirely decision making. It requires fast and instinctive response to subtle changes in environment, pattern recognition that human brains are better at than algorithms.
To me this technology perfectly encapsulates the difficulty in making algorithms that mimic human behavior. The last 10% of optimization to make par with humans requires an exponential amount more energy and research than the first 90% does. 90% of the performance of a human is entirely insufficient where life and death is concerned.
Investment should be going to public transport systems. They are more cost-efficient, more accessible, more fuel/resource-efficient, and far, far safer than cars could ever be, even with all human drivers. This is a colossal waste of energy, time, and money on a product that will not be on par with human performance for a long time. Those resources could be making our world more accessible for everyone; instead they're making it more accessible for no one and making the roads significantly more dangerous. Capitalism will be the end of us all if we let it. Sorry that train and bus infrastructure isn't "flashy enough" for you. You clearly haven't seen the public transport systems in Beijing. The technology we have here is decades behind and so underfunded it's infuriating.
This technology purely exists to make human drivers redundant and put the money in the hands of big tech and, eventually, a ruling class composed of politicians, risk-averse capitalists, and bureaucracy. There is no other explanation for robotaxis to exist. There are better solutions, like trains and metros, which can solve the movement of people from point A to point B easily. They just don't come with the 3x-10x capital growth that making human drivers redundant will bring the big tech companies.
This technology purely exists to make human drivers redundant and put the money in the hands of big tech and, eventually, a ruling class composed of politicians, risk-averse capitalists, and bureaucracy. There is no other explanation for robotaxis to exist.
There is another reason, though, and it’s much simpler. Basic greed.
There are people who see the opportunity to make more money for themselves, so they'll do it. When it comes to robotaxis, they're not interested in class struggles, it's not about politics; their interest in making human drivers redundant extends only so far as increasing their customer base. These aren't Machiavellian schemers rubbing their hands together and cackling at their dark designs coming to fruition, it's just assholes in suits whose one and only concern is "number go up."
Even when it comes to their politics and to the class dynamics, their end goal is always the same. Number go up. They don’t care about what harm it could do. They’re not intent on deliberately doing more harm, they give no thought to doing less harm, they do not care. All that drives them, ever, is Number Go Up.
You got downvoted but you’re right. The only cabal at work here is basic human greed. Anytime you want to know why people do something, consider the motivation of the person and the incentives. Musk constantly talks about how autonomy will make his company worth “trillions”, and he wants that because he’ll keep maxing the high score in Billionaire Bastard Bacchanalia.
He can claim noble intentions, but as you said, the game is simply to make Number Go Up. That it causes untold harm to others isn’t even an afterthought.
Public transport systems are just part of a mobility solution, but it isn't viable to have them everywhere. Heck, even here in The Netherlands, a country the size of a postage stamp, public transport doesn't work outside of the major cities. So basically, outside of the cities, we are also relying on cars.
Therefore, I do believe there will be a place for autonomous driving in the future of mobility, and that it has the potential to reduce the number of accidents, traffic jams, and parking problems while increasing the average speed we drive around at.
The only thing that has me a bit worried is Tesla's approach to autonomous driving, fully relying on the camera system. Somehow, Musk believes a camera system is superior to human vision, while it's not. I drive a Tesla (yeah, I know), and if the conditions aren't perfect, the car disables "safety" features, like lane assist. For instance, when it's raining heavily or when the sun is shining directly into the camera lenses. This must be a key reason for choosing Austin for the demo/rollout.
Meanwhile, we see what other manufacturers use and how they are progressing. For instance, BMW and Mercedes are doing well with their systems, which are a blend of cameras and sensors. To me, that does seem like the way to go to introduce autonomous driving safely.
There are usually buses from villages into the major cities, though. I live in one, and there's a bus every hour to a nearby city, from where I can then take a train. I wouldn't say it's that bad.
Depends on how far you live from the city, I guess; where I live it's 2 hours to the major cities. But anyway, a 1 hr wait to get somewhere doesn't feel desirable to me. It just doesn't provide enough coverage to fully replace a car.
I believe Austin was chosen because they’re fairly lax about the regulations and safety requirements.
Waymo already got the deal in Cali, and Cali seems much more strict. Austin is offering them faster time to market at the cost of civilian safety.
I’ve been saying for years that focusing on self driving cars is solving the wrong problem. The problem is so many people need their own personal car at all.
Exactly. Bring back trams, build fewer suburbs, build better apartment housing. If we want a society reorganized around accessibility, then let's actually build that.
While I agree focusing on public transport is a better idea, it's completely absurd to say machines can never possibly drive as well as humans. It's like saying a soul is required, or other superstitious nonsense like that. Imagine the hypothetical case in which a supercomputer that perfectly emulates a human brain is what we are trying to teach to drive. Do you think that couldn't drive? If so, you're saying a soul is what allows a human to drive, and may as well be saying that God hath uniquely imbued us with the ability to drive. If you do think that could drive, then surely a slightly less powerful computer could. And maybe one less powerful than that. So somewhere between a Casio solar calculator and an emulated human brain must be able to learn to drive. Maybe that's beyond where we're at now (I don't necessarily think it is), but it's certainly not impossible just out of principle. Ultimately, you are a computer at the end of the day.
I never did say it wouldn't ever be possible, just that it will take a long time to reach par with humans. Driving is culturally specific, even. The way rules are followed and practiced is often regionally different. There's more than just the mechanical act itself.
The ethics of putting automation in control of potentially life threatening machines is also relevant. With humans we can attribute cause and attempt improvement; with automation it's different.
I just don’t see a need for this at all. I think investing in public transportation more than reproduces all the benefits of automated cars without nearly as many of the dangers and risks.
Driving is culturally specific, even. The way rules are followed and practiced is often regionally different
This is one of the problems driving automation solves trivially when applied at scale. Machines will follow the same rules regardless of where they are which is better for everyone
The ethics of putting automation in control of potentially life threatening machines is also relevant
You’d shit yourself if you knew how many life threatening machines are already controlled by computers far simpler than anything in a self driving car. Industrially, we have learned the lesson that computers, even ones running on extremely simple logic, just completely outclass humans on safety because they do the same thing every time. There are giant chemical manufacturing facilities that are run by a couple guys in a control room that watch a screen because 99% of it is already automated. I’m talking thousands of gallons an hour of hazardous, poisonous, flammable materials running through a system run on 20 year old computers. Water chemical additions at your local water treatment plant that could kill thousands of people if done wrong, all controlled by machines because we know they’re more reliable than humans
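To make that concrete, here's a toy sketch of the kind of dead-simple interlock logic I'm describing; the tag names and limits are made up for the example, not taken from any real plant:

```python
# Toy illustration of a hard-wired safety interlock, the sort of dumb,
# repeatable logic that runs dosing systems at a treatment plant.
# All tag names and limits here are invented for the example.

CHLORINE_MAX_PPM = 4.0   # hypothetical residual limit
FLOW_MIN_LPM = 50.0      # don't dose into stagnant water

def dosing_pump_permitted(chlorine_ppm: float, flow_lpm: float) -> bool:
    """Return True only if it is safe to run the dosing pump.

    Unlike a bored operator, this check runs identically on every
    scan cycle, every shift, forever.
    """
    if chlorine_ppm >= CHLORINE_MAX_PPM:
        return False  # residual already at the limit: stop dosing
    if flow_lpm < FLOW_MIN_LPM:
        return False  # no flow to dilute into: stop dosing
    return True

# Example scans (readings would come from sensors in a real system):
print(dosing_pump_permitted(chlorine_ppm=1.2, flow_lpm=300.0))  # True
print(dosing_pump_permitted(chlorine_ppm=5.1, flow_lpm=300.0))  # False
```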
With humans we can attribute cause and attempt improvement; with automation it's different.
A machine can't drink a handle of vodka and get behind the wheel, nor can it drive home sobbing after a rough breakup and be unable to process information properly. You can also update all of them at once instead of dealing with PSA campaigns telling people not to do something that got someone killed. Self-driving car makes a mistake? You don't have to guess what was going through its head; it has a log. Figure out how to fix it? Guess what, they're all fixed with the same software update. If a human makes that mistake, thousands of people will keep making that same mistake until cars or roads are redesigned and those changes have a way to filter through all of society.
I just don’t see a need for this at all. I think investing in public transportation more than reproduces all the benefits of automated cars without nearly as many of the dangers and risks.
This is a valid point, but this doesn’t have to be either/or. Cars have a great utility even in a system with public transit. People and freight have to get from the rail station or port to wherever they need to go somehow, even in a utopia with a perfect public transit system. We can do both, we’re just choosing not to in America, and it’s not like self driving cars are intrinsically opposed to public transit just by existing.
What are you anticipating for the automated driving adoption rate? I’m expecting extremely low as most people cannot afford new cars. We are talking probably decades before there are enough automated driving cars to fundamentally alter traffic in such a way as to entirely eliminate human driving culture.
In response to the "humans are fallible" bit, I'll remark again that algorithms are very fallible. Statistically, even. And while lots of automated algorithms are controlling life and death machines, try justifying that to someone whose entire family is killed by an AI. How do they even receive compensation for that? Who is at fault? A family died. With human drivers we can ascribe fault very easily. With automated algorithms, fault is less easily ascribed, and the public writ large is going to have a much harder time accepting that.
Also, with natural gas and other systems there are far fewer variables than on a busy freeway. There's a reason why it hasn't happened until recently. Hundreds of humans all in control of large vehicles moving in a long line at speed is a very complicated environment with many factors to consider. How accurately will algorithms be able to infer driving intent based on the subtle movements of vehicles in front of and behind them? How accurate is the situational awareness of an algorithm, especially when combined road factors are involved?
It's just not as simple as it's being made out to be. This isn't a chess problem; it's not a question of controlling train cars on set tracks with fixed timetables and universal controllers. The way cars exist presently is very, very open-ended. I agree that if 80+% of road vehicles were automated it would have such an impact on road culture as to standardize certain behaviors. But we are very, very far away from that in North America. Most of the people in my area are driving cars from the early 2010s. It's going to be at least a decade before any sizable number of vehicles are current-year models. And until then, algorithms have these obstacles that cannot easily be overcome.
It's like I said earlier: the last 10% of optimization requires an exponentially larger amount of energy and development than the first 90% does. It's the same problem faced with other forms of automation. And a difference of 10% in terms of performance is… huge when it comes to road vehicles.
I always have the same thought when I see self driving taxi news.
“Americans will go bankrupt trying to prop up the auto/gas industries rather than simply building a train”.
And it’s true. So much money is being burned on a subpar and dangerous product. Yet we’ve just cut and cancelled extremely beneficial high speed rail projects that were overwhelmingly voted for by the people.