Police in England installed an AI camera system along a major road. It caught almost 300 drivers in its first 3 days: 180 seat belt offenses and 117 mobile phone offenses.
Photos flagged by the AI are then sent to a person for review.
If an offense was correctly identified, the driver is then sent either a notice of warning or intended prosecution, depending on the severity of the offense.
The AI just “identifying” offenses is the easy part. It would be interesting to know whether the AI indeed correctly identified 300 offenses or if the person reviewing the AI’s images acted on 300 offenses. That’s potentially a huge difference and would have been the relevant part of the news.
The system we use in NL is called “monocam”. A few years ago it caught 95% of all offenders.
This means that AI had at most 5% false negatives.
I wonder if they have improved the system in the mean time.
Nobody cares about false negatives. As long as the number isn’t so massive that the system is completely useless, false negatives in an automated system are not a problem.
What are the false positives? Every single false positive is a gross injustice. If you can’t come up with a number for that, then you haven’t even evaluated your system.
The system works with AI flagging phone usage while driving.
Then a human will verify the photo.
AI is used to respect people’s privacy.
The combination of the AI detection+human review leads to a 5% false negative rate, and most probably 0% false positive.
This means that the AI missed at most 5% positives, but probably less because of the human reviewer not being 100% sure there was an offense.
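To make that arithmetic concrete: a ticket requires both an AI flag and a human confirmation, so the reported combined rate puts a floor under the AI’s own recall. A rough sketch in Python (the human confirmation rate here is an assumption for illustration, not a published figure):

```python
# Sketch of how the combined 95% catch rate bounds the AI's own recall.
# combined_catch_rate comes from the thread; human_confirm_rate is assumed.
combined_catch_rate = 0.95   # reported: AI detection + human review together
human_confirm_rate = 0.98    # assumed: reviewer confirms 98% of true offenses the AI flags

# A ticket requires both an AI flag and a human confirmation, so:
#   combined = ai_recall * human_confirm_rate
ai_recall = combined_catch_rate / human_confirm_rate

print(f"implied AI recall: {ai_recall:.1%}")  # ~96.9%, i.e. the AI itself missed < 5%
```

The more cautious the human reviewer is assumed to be, the higher the AI’s implied recall has to be to hit the same combined number.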
Look, I’m not saying it’s a bad system. Maybe it’s great. “Most probably 0%” is meaningless though. If all you’ve got is gut feelings about it, then you don’t know anything about it. Humans make mistakes in the best of circumstances, and they get way, way worse when you’re telling them that they’re evaluating something that’s already pretty reliable. You need to know it’s not giving false positive, not have a warm fuzzy feeling about it.
Again, I don’t know if someone else has already done that. Maybe they have. I don’t live in the Netherlands. I don’t trust it until I see the numbers that matter though, and the more numbers that don’t matter I see without the ones that do, the less I trust it.
The fine contains a letter, a picture and payment information. If the person really wasn’t using their phone, they can file a complaint and the fine will be dismissed. Seems pretty simple to me.
However, I have not heard any complaints about it in the news, and an embarrassing number of fines have been issued for this offense.
How do they know that they caught 95% of all offenders if they didn’t catch the remaining 5%? Wouldn’t that be unknowable?
Welcome to the world of training datasets.
There are many ways to go about it, but for a limited number they’d probably use human analysts.
But in general, they’d put a lot more effort into a chunk of data and use that as the truth. It’s not a perfect method but it’s good enough.
The article didn’t really clarify that part, so it’s impossible to tell. My guess is, they tested the system by intentionally driving under it 100 times with a phone in hand. If the camera caught 95 of those, that’s how you would get the 95% catch rate. That setup has a priori information about the true state of the driver, but testing takes a while.
However, that’s not the only way to test a system like this. They could have tested it with normal drivers instead. To borrow a medical term, you could say that this is an “in vivo” test. If they did that, there was no a priori information about the true state of each driver. They could still report a different 95% value though. What if 95% of the positives were human verified to be true positives and the remaining 5% were false positives. In a setup like that we have no information about true or false negatives, so this kind of test setup has some limitations. I guess you could count the number of cars labeled negative, but we just can’t know how many of them were true negatives unless you get a bunch of humans to review an inordinate amount of footage. Even then you still wouldn’t know for sure, because humans make mistakes too.
In practical terms, it would still be a really good test, because you can easily have thousands of people drive under the camera within a very short period of time. You don’t know anything about the negatives, but do you really need to? This isn’t a diagnostic test where you need to calculate sensitivity, specificity, positive predictive value, and negative predictive value. I mean, it would be really nice if you did, but do you really have to?
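For anyone who wants those diagnostic terms spelled out: all four metrics fall straight out of a confusion matrix. A quick sketch with entirely made-up counts for 1000 cars:

```python
# Illustrative only: invented counts for 1000 cars passing the camera.
tp, fp, fn, tn = 95, 2, 5, 898  # true/false positives and negatives

sensitivity = tp / (tp + fn)  # share of real offenders flagged (recall)
specificity = tn / (tn + fp)  # share of innocent drivers correctly ignored
ppv = tp / (tp + fp)          # chance that a flag is a real offense
npv = tn / (tn + fn)          # chance that an unflagged driver is innocent

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")
```

With these invented numbers you’d get 95% sensitivity but about 97.9% PPV. The catch, as described above, is that only the positives ever reach a human reviewer, so sensitivity is the hard one to measure in the field.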
Just to clarify the result: the article states that AI and human review leads to 95%.
Could also be that the human is flagging actual positives, found by the AI, as false positives.
You wouldn’t need people to actually drive past the camera; you could do that in software while the AI was still in development, without the physical hardware.
You could just get CCTV footage from traffic cameras and feed that into the AI system. Then you could have humans go through it independently of the AI and tag any infraction they saw. If the AI system gets 95% of the human-spotted infractions, then the system is 95% accurate. Of course, this ignores the possibility that both the human and the AI miss something, but that would be impossible to account for.
That’s the sensible way to do it in the early stages of development. Once you’re reasonably happy with the trained model, you need to test the entire system to see if each part actually works together. At that point, it could be sensible to run the two types of experiments I outlined. Different tests for different stages.
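The CCTV comparison described above is just measuring recall against human tags. A minimal sketch, with all the IDs made up:

```python
# Hypothetical: compare AI detections against incidents tagged by human reviewers.
human_tagged = {"car_03", "car_17", "car_21", "car_40", "car_55"}  # ground truth
ai_flagged = {"car_03", "car_17", "car_21", "car_55", "car_99"}    # AI output

caught = human_tagged & ai_flagged  # incidents both agree on
missed = human_tagged - ai_flagged  # AI misses vs. the human baseline
recall = len(caught) / len(human_tagged)

print(f"AI caught {len(caught)}/{len(human_tagged)} tagged incidents ({recall:.0%})")
# "car_99" would need a second look: either a human miss or an AI false positive.
```

This only works as well as the human baseline does, which is exactly the limitation mentioned above.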
I think 95% correct reports is what they mean. There could be a massive population of other offenders who continue sexting and driving, or worse. One monocam won’t ever be enough; we need many monocams. Polymonocams.
I suspect they sent through a controlled set of cars where they tested all kinds of scenarios.
Other option would be to do a human review after installing it for a day.
deleted by creator
Yeah that’s a shame, would be really interested to know those numbers.
I love threads like these because they really show how flexible opinions are. Post about an AI surveillance state and everyone is against it, but post about car drivers getting fined for not wearing a seatbelt and everyone loves it.
This is a weird phenomenon. Feels a bit like how focusing on “welfare queens” / “dole bludgers” can pave the way for similar privacy erosion (and welfare cuts) even though it’s a tiny percentage of the people. Seems a short hop away from “if you’ve got nothing to hide…”
Except in this case being a poor driver actively puts others at risk rather than just being a drain on tax money.
Seatbelts I don’t really care about, because with that people mostly just affect themselves (or others in the same car), but for other infractions it makes sense.
The real issue is whether you can trust that the data will only be used for its intended purpose, as right now there are basically no good mechanisms to prevent misuse.
If we had cameras where you could somehow guarantee that - no access for reason other than stated, only when flagged or otherwise by court order, all access to footage logged with the audit log being publicly available, independent system flagging suspicious accesses to any footage, etc. - it wouldn’t be too bad.
Compared to all the private cameras that exist in cars these days…
You know the best way to not have absolute power corrupt? Not have absolute power.
If you collect this data, there is a degree of probability that eventually it will be abused. If you don’t collect this data, there is zero chance.
Some > none
Good government is about assuming the worst and deciding if you are willing to endure that. If the absolute worst humans you can imagine were put into office, how much damage could they do?
Surely the ultimate takeaway from that is we’re not OK with people breaking the law, and we’re not OK with AI taking people’s jobs. There is no conflict here.
So you think most people like the idea of a surveillance state automatically enforcing its every whim with perfect efficiency?
I’m pretty sure that’s something pretty much universally disliked
You don’t know many authoritarians.
Of course, they only think this should apply to everyone else.
I don’t think I said that, did I?
I just wish they would have one where I live to fine all the people using the HOV lane who aren’t supposed to be
Then we watch the numbers plummet, see there’s only actually 5% of people using the lane, and finally see how useless the HOV lane is so we can just make it a regular third lane.
The HOV lane is supposed to look empty. If it was packed full of cars, carpooling wouldn’t have any advantage because you wouldn’t go any faster.
It doesn’t work that well around here, cause there’s inevitably that one car that refuses to go faster than the rest of the traffic that it’s separated from. Or slows down to 10mph when the rest of the highway is stop and go, despite there being a barrier. Then someone gets rear ended because no one was expecting the lane to be going 10mph (and were on their phone), and the accident closes down the lane entirely
Basically, by me, the HOV lane is slower than traffic 90% of the time. Even in stop and go, because that lane is actually the one containing the accident causing the traffic.
Well, uhh, sounds like you could use some more traffic enforcement there. Maybe with AI and cameras ;)
I thought the advantage of carpooling was saving money on gas and car maintenance. Also, environment.
In its current form it’s good technology. It’s all fine as long as you’re chasing crimes we all agree are bad.* It’s the slippery slope I’m worried about. It’s just a matter of time until this is used for something malicious we don’t agree with.
*I don’t care if front seat passengers wear a seatbelt or not as long as they’re adults.
The slippery slope is what makes this not okay. It’s a completely unnecessary invasion of privacy in the guise of “safety”.
I’d love to see some statistics showing that these things are anything other than an additional tax on drivers. This is bad for everyone: it desensitizes you and opens the door to further surveillance in the future.
“Slippery slope” is a common argument but usually flawed. In this case, driving is an extraordinarily regulated privilege and despite that, it still results in massive deaths and permanent life changing injury every year. In the US, car crashes are the number one cause of death for children. It’s difficult to draw a line between expanding driving enforcement to gross losses in privacy like many here are envisioning.
It also ignores the benefits to civil rights. Again, I don’t know about the UK but in the US, traffic enforcement by police is very unevenly applied. Minorities routinely get their privacy violated on pretexts while cops don’t even pay lip service to the rules.
I am just waiting for the article in the year that shows this system falsely reports darker skin people as breaking the law more often. It sees their hand and decides that the hand looks like a black cellphone or something.
Just like literally every other automated system with a camera that evaluates people.
Ugh I wish that wasn’t plausible…
Just as an aside, gun violence is now the leading cause of death for children in the US; vehicle collisions are now 2nd, due to gun violence increasing and vehicle collisions decreasing.
It isn’t though.
It isn’t an unnecessary invasion of privacy. You have no expectation of privacy when driving around on public streets, and to say you’re allowed to break the law and use personal privacy as an excuse is absurd.
While I don’t disagree with the statement around privacy in public, I would encourage you to temper that thought with the realization that when that was developed we did not have the ability to be everywhere at once with cameras or fly drones over people’s homes or track cellphones with GPS or use computers to process this information.
This information can and has been abused.
Maybe we should change our expectations to SOME privacy in public.
If there was no expectation of privacy, why do governments get upset about window tinting, license plate laser blockers, and radar detectors? It should be no different than curtains, shutters, and any other form of passive radio reception.
Yeah, people say this but it isn’t really true. If I was following some kid in your family, keeping logs, taking photos, and posting those photos and logs online, I am pretty sure it would bother you. Way back in my uni days there was an incident of someone doing exactly that to the coeds on campus. The school was able to stop it solely because he used a school computer, not through some legal mechanism.
You only think you have no expectation of privacy when no one tries to violate it.
Happens to celebrities. The reason it doesn’t happen to me is I’m not very interesting.
But it being annoying isn’t really the point; that’s not how the law works. I don’t make the law, I’m just pointing out how the law works, and under the law you have no expectation of privacy in public.
A beautiful strawman. This is about driving and traffic enforcement by the government, not creepy campus stalking by a crazy person.
There is no conceivable reality where the government will publicly post your movements for everyone to see based on this system. None.
Does expectation of privacy disappear if there is no abuse? I wonder because expectation of privacy is about belief not based on motivations or integrity of others.
You’re still beating up that strawman. Expectations of privacy change based on context. Driving = no. Walking around = yes.
At least in the US, I believe this is actual legal case law so I’m not making stuff up here.
I am not okay with this. Seatbelt wearing is a private matter. Yes, I wear mine.
The issue is these people getting into accidents requiring preventable extensive medical help is not just a private matter.
Very well. Maybe we should start fining people for being fat or not working out or not eating enough veggies.
Leave it to the car insurance companies to take a great idea like universal healthcare and use it to restrict our rights.
I’m sorry but I don’t agree that just because you want to do something means you should have an automatic right to do that thing. Freedom to do what you want has to be tempered against the damage that is done to society by you doing that thing.
Yeah, take not wearing a seatbelt: the damage to society, the amount of money society has to expend if you mess up and crash, is a lot higher if you’re not wearing a seat belt than if you were. Given what your actions cost society to fix, I think it’s acceptable that not wearing a seatbelt is a restricted freedom.
We don’t all live in a universe where every action we take has no consequences. Every time you decide to be an idiot, you are not just affecting yourself, but everyone else as well.
Take smoking in public, that freedom has been restricted in most countries in the world because quite a lot of people don’t want to have to breathe in your smoke. It’s not about you, it’s about how your actions affect everyone else.
Selfish people don’t like this because they think that they should be allowed to be a jackass to everyone and no one else should have the right or authority to prevent them from doing that. The jackasses are by default not operating within the established rules of civilisation, they wish to be independent of it but still make use of it.
And to put it technically, they can sod off.
I am not pro seat belt laws. It is your life and you should be able to throw it away if you want.
Not wearing a seatbelt reduces the security of others. If you want to throw it away, that’s a different matter and should not be handled through seat belt laws.
Please show me the multiple double blind studies that you used to arrive at that conclusion.
Source: I fucking made it up.
But isn’t it simple logic? Maybe a driver pulls the wheel a bit too hard, due to having no belt loses balance, boom, he hits someone.
It might be “logical” but I prefer evidence-based policy. Especially when we are restricting individual rights.
What if it was something you cared about? What if I don’t know your favorite form of music was going to be criminalized, would you accept “logical” as justification?
That is a ridiculous straw man argument, and the fact you came up with it basically indicates you have no actual interest in a proper debate.
We are weighing not wearing a seat belt against restricting music.
One of them has a real, obvious reason for restriction: not wearing a seatbelt decreases everybody else’s safety.
There is no public safety consideration for banning a particular genre of music.
It is appropriate to impose limited freedom restrictions in cases where not doing so would result in potential issues for other people.
For example you are not allowed to play excessively loud music after a certain amount of time because that affects other people. But music is not banned outright and a genre of music would never be banned outright because that would be obviously ridiculous.
If you’re going to have this conversation at least be reasonable.
Answering questions is easier than attacking questions.
The UK has never seen a dystopian nightmare they didn’t rush to embrace.
ITT a bunch of people who have never read an ounce of sci fi (or got entirely the wrong message and think law being enforced by robots is a good thing)
But the law isn’t enforced by robots; the law is enforced by humans. All that’s happening here is that the process of capturing transgressions has been automated. I don’t see how that’s a problem.
As long as humans are still part of the sentencing process, and they are, then functionally there’s no difference; if a mistake is made, it will be rectified at that point. From a process point of view there isn’t really any difference between being caught by an automated AI camera and being caught by a traffic cop.
Although completely reasonable, I fear that your conclusion is inaccessible for most folks.
And as a pedestrian, I’m all for a system that’s capable of reducing distracted driving.
The way to disincentivize a motoring public is to make driving a stressful affair. Currently, that stress is other people. Soon, it’ll be catalogs of minor infractions, caught at the millisecond intervals they occur in, forever, with the bill showing up every single week for the rest of your driving life. Odds are it’s going to be scrapped, made a boogeyman for a while, and then brought back every time people get testy about gas prices.
The trick to get people to not drive as much is to make public transportation easier, not to make driving harder. All you accomplish by making driving harder is punishing the group of people who have the least agency.
Let me guess, you’re into urban planning.
You have never had to dispute one of those tickets I assume.
Almost a decade ago I got one in the mail from a city about 9 hours away from my house. I went through the dispute process, being told repeatedly, “I am tired of people claiming it wasn’t them,” while I suggested that if their system worked they would most likely get fewer calls. By pure luck I noticed that the date was the exact date my daughter was born, and thus the only way I could have been in that city is if I had somehow left my wife while she was in labor and managed to move my car 9 hours away. Once I pointed that out and offered to send them the birth certificate, they gave up.
The problem with these systems is that they are trusted 100%, and the burden falls on the regular person to prove their innocence. Which is the exact opposite of what the relationship should be. If I get issued a ticket, it should be on the state to produce the evidence, not on me to get lucky.
If you read the article it makes it clear it wouldn’t get that far.
It goes to a human operator who looks at the picture and says whether or not they can actually see a violation in the image. So it wouldn’t get as far as an official sanction, and you wouldn’t have to go through that process.
I am sorry am I talking to myself? I just gave you a literal example of this not working.
It’s going to disproportionately target minorities. ML* isn’t some wonderful impartial observer, it’s subject to all the same biases as the people who made it. Whether the people at the end of the process are impartial or not barely matters either imo, they’re going to get the biased results of the ML looking for criminals so it’s still going to be a flawed system even if the human element is OK. Ffs please don’t support this kind of dystopian shit, Idk how it’s not completely obvious how horrifying this stuff is
*what people call AI is not intelligent at all. It uses machine learning, the same process as chatbots and autocorrect. AI is a buzzword used by tech bros who are desperate to “invest in the future”
Sounds like youd support precrime too. Stop licking boots
What the hell is precrime?
Calling an image recognition system a robot enforcing the law is such a stretch you’re going to pull a muscle.
deleted by creator
Face recognition data sets and the like tend to be pretty heavily skewed, they usually have a lot more white people than poc. You can see this when ML image filters turn black people into white people or literal gorillas. Unless the data set properly represents a super diverse set of people (and tbh probably even if it does), there’s going to be a lot of race based false positives/negatives
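One way such skew would show up in practice is a per-group false-positive audit. A sketch with invented counts (no real data here):

```python
# Hypothetical bias audit: false-positive rate per demographic group.
# All counts are invented for illustration.
audits = {
    # group: (false_positives, innocent_drivers_reviewed)
    "group_a": (12, 4000),
    "group_b": (31, 3800),
}

for group, (fp, innocent) in audits.items():
    rate = fp / innocent
    print(f"{group}: false-positive rate {rate:.2%}")
# A materially higher rate for one group is exactly the skew described above,
# and it never shows up if you only report a single aggregate accuracy number.
```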
deleted by creator
That might be the case tbh, but either way that would be bad and discriminatory. I might just be overthinking it, it might not actually be that bad, but I know discrimination like that is super common when it comes to how recognition-based ML is trained
According to Sci-fi organ transplants will lead to the creation of monsters who will kill us all for “tampering in God’s domain.”
Maybe fiction isn’t the best way to determine policy…
Worked for Gattaca. GINA pretty much only exists because of it.
I for one base ALL my global policy on sci Fi novels 🤦♂️
Since the writers are on strike, we could have them just write the entire legal code, and the writers of Black Mirror would actually be taken seriously beyond nerds for once.
Yeah!! Now they got them new fangled pooters in cars too!! No thank U Night Rider!!
Is the freedom to drive without feeling like you’re being watched more important than the prevention of texting while driving?
During my commute, it’s common to see people looking at their phones. I don’t know what the effect is without statistics, but seeing an accident along the way is a usual occurrence.
inattentive driving should be considered gross negligence
Can’t believe people still have the audacity to text while driving. I prefer reading a nice relaxing book.
Or painting a nice landscape
I’m more concerned about error rates and false accusations
Doesn’t it say that each image is sent to a human for review before any charges are laid? Might not be the case forever, but at least for now it’s actually a human who ultimately decides whether or not to prosecute a driver.
That’s the important part for me. As long as the whole process isn’t automated I’m fine with it.
Yes, obviously. Ffs how is this post so full of authoritarian assholes who think more law enforcement (not even done by real people mind you, but by a machine with no sense of nuance or anything) is the solution to anything other than strengthening a fascist government?
Leaving traffic safety to human police discretion often means racist biases and outcomes.
No. Your freedom to feel feelings is your problem. If you feel like you’re not being observed right now, your feeling is already wrong.
Fuck ai and cameras and uk.
FUCK CAMERAS. THAT’S WHY I VOTED BREXIT. SO I CAN TEXT AND DRIVE, NOT LIKE THESE BAGUETTE LOVING SISSIES
Monitor my internet and tell me what I can do harder daddy
What?
I work in an adjacent industry and got a sales pitch from a company offering a similar service. They said that they get the AI to flag the images and then people working from home confirm - and they said it’s a lot of people with disabilities/etc getting extra cash that way.
This was about six months ago and I asked them, “there’s a lot of bias in AI training datasets - was a diverse dataset used or was it trained mostly on people who look like me (note: I’m white)?” and they completely dodged the question…
(this is definitely a different company as I am not in England)
Yeah, does the AI automatically flag every taxi, or only when they’re on the phone? Has the AI ever seen a taxi driver who’s not on the phone, in order to check this?
“Hookay thanks for the presentation fellas, but lemme ask ya: Was your model trained only on iPhones or was a diverse palette of plastic Android phones from the last 15 years also taken into account?”
deleted by creator
Hell yeah, use machines to strip people of their freedom and privacy in exchange for “safety” and “security”, that could never go wrong.
I understand your POV, but I feel it’s misplaced. You are in public in a vehicle. You are in public on a sidewalk. The same laws that have been used to record police are being used here. You have no expectation of privacy in public, and if you are seen or recorded breaking a law, that is on you.
The same laws that have been used to record police are the same being used here.
I don’t think you understand my point. It’s been made clear the First Amendment applies to filming anyone, including police, in public. Any policies that try to bypass that will be destroyed in court. Those same rules apply to all of us as well.
We can absolutely be recorded in public.
Yeah, given the tech today this needs to be revisited at the very least
I think deaths jumped a bit post COVID but I don’t think they are skyrocketing. Do you have a source?
deleted by creator
100% agree. It flags infractions, you have people verify what was being flagged, due course follows.
There’s a name for that: risk compensation. The safer the item is, the more reckless the person becomes.
Meanwhile in New Orleans
https://www.fox8live.com/2023/07/26/nopd-use-facial-recognition-leads-zero-arrests-nine-months/
I know this is gonna be a hot take, but I think there’s a huge opportunity to increase road safety using automation. Where I live the police have largely stopped bothering with minor traffic offenses due to problems with racial profiling, which solves the racial profiling issue but means that it’s very hard to drive so poorly you get pulled over.
It seems like simply ticketing people automatically for driving over the speed limit or running stop signs would be dirt cheap and massively improve driving standards. You wouldn’t even need to do facial recognition or anything, just use the same systems that are already in place for toll by plate to fine the vehicle owner.
Mellow greetings, citizen. This idea was a running gag in Demolition Man.
Enhance your calm, John Spartan.
I remember an opening to Seaquest DSV where the captain was riding his motorcycle to the base and a camera pops out of the ground, scans his plate, and he receives an email with the fine when he reached his destination. No other human involved, and this show was ~20 years ago.
It pains me to point out that it’s more like ~30 years ago. Rest in peace Jonathan Brandis.
I’m surprised car companies haven’t already partnered with governments to have the vehicles themselves snitch on the occupants. Why install these camera systems all over the place when the vehicles themselves collect ridiculous amounts of data with greater accuracy? I’m sure the car companies would love the additional revenue stream and the governments would love the greater surveillance capabilities.
Probably because they wouldn’t see a dime of revenue from this. It would be a new law that just says they have to do it. At best, they would be allowed to pass the costs to customers somehow, likely through our plate registrations at the DMV.
It’s basically a no win for the car companies. Lots of ill will, increased chance of litigation, increased costs for building cars, all for nothing.
In fact, I bet the car companies lobbyists are the reason we don’t have this already.
There were 180 seat belt offenses and 117 mobile phone offenses
and 300,000 drivers’ privacy got violated for a single offender. Someone should give the AI a fine. Oh wait, “privacy” is not a word in the English language anymore; it’s just gibberish with no meaning.
Found the shitty driver
You don’t have an expectation of privacy while driving on a public road and you never did.
I have an expectation of existing and having privacy because not everywhere is a camera. If you don’t have that expectation anymore, then that’s sad.
There is a huge difference between letting an AI check EVERY car ALL THE TIME and police doing random checks on random roads. One is a privacy violation for a few, to find some people texting and driving or wearing no seatbelt, which then raises everyone’s awareness of these issues. The other is treating all your citizens as potential “criminals” and therefore making it normal to violate everyone’s privacy.
A government that starts to treat all citizens like potential criminals all the time and put them on camera on every street and in their car and on public transport, in school and at work… is not a government that is on your side and wants to protect you, that’s a prison guard.
“Expectation of privacy” is a legal term with an agreed-upon definition that isn’t subject to your intuition or what you consider to be “sad”.
I know it is a legal term. It was meant to help protect people’s privacy, but got perverted to now mean that privacy only exists at home. That is sad, and not understanding that it is sad is even more sad.
Sorry, what did the police do and how many people did they catch in how much time :/
deleted by creator
Your car also isn’t a private space when the kid you just hit goes through your windscreen because you were doing 10 over the limit while looking at your phone
Are parents letting kids play on motorways these days?
Doesn’t matter, you still can’t run them over
Yes I can, the AI is trained to detect users not wearing seatbelts and those being on their phones. Never has it seen my hydrogen powered snowplow-bearing vehicle that glides through street playing kids like a hot knife through butter at a summer bbq. Those photos shouldn’t get sent back for review.
deleted by creator
deleted by creator
Your car is a private space for the most part—put up those fuzzy dice even if they could block your view. U wanna turn your music so high you can’t hear a siren? Go for it—for the most part. Your driveway and any road you wanna put on your property is also a private space.
A shared road is not private. That’s the issue here. The question for regulators and governments is: How do you make sure everybody is reasonably safe without recalling billions of cars to have the “trivial” changes you proposed?
Sometimes that’s good old fashioned seat belts. Sometimes that’s Ai. Could be a speed bump. Privacy really doesn’t apply here—you’re in a public space.
If a shared road is not private, how come radar detectors are usually illegal? You are in public, picking up radio signals. Clearly there is no expectation of privacy when you are sending out radio waves in a shared area.
deleted by creator
Each individual driver is not entirely self-contained in their motivations and the way they drive, with no influence from what happens to other drivers around them.
Privacy and road safety are definitely linked, because there are systemic effects to more effective enforcement of driving rules: drivers fear getting caught a lot more (because they know somebody who knows somebody who got caught), and people tend to drive like those around them (it's one thing to be the rule breaker amidst widespread rule breaking, it's another to be the rule breaker amidst widespread rule compliance).
Now, lowering privacy for the sake of something other than enforcing the rules (or when there is an alternative way of achieving the same thing without breaking people's privacy), that's a whole different matter. But if the reason is to catch rule-breakers, it will most definitely save lives, because there will be people who now drive within the rules out of fear of getting caught who would otherwise have driven in ways that endanger others.
To quote you: “You need to give it a bit more thought.”
That ship has sailed a long time ago. Tons of cars on the roads now have built-in cameras. Cameras that are ultimately under centralized control of the car manufacturer.
And then there are all the outdoor cameras that homes and businesses have.
Using AI for this is so stupid. There will be a lot of false negatives and positives.
I think it’s a pretty good idea, the AI does a first pass, flags potential violations and sends them to a human for review. It’s not like they are just sending people fines directly based on the AI output.
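That two-stage flow, where the model only flags candidates and a human makes the actual call, can be sketched in a few lines. The threshold, field names, and offense labels below are my own assumptions for illustration, not details from the article:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    photo_id: str
    offense: str        # e.g. "phone" or "seatbelt" (hypothetical labels)
    confidence: float   # model score in [0, 1]

def triage(detections, threshold=0.8):
    """First pass: only high-confidence flags go to a human reviewer.
    Nothing is fined on AI output alone; low-confidence flags are dropped."""
    review_queue = [d for d in detections if d.confidence >= threshold]
    discarded = [d for d in detections if d.confidence < threshold]
    return review_queue, discarded

# Example: three model outputs, one below the review threshold
flags = [
    Detection("a1", "phone", 0.95),
    Detection("a2", "seatbelt", 0.55),
    Detection("a3", "phone", 0.88),
]
queue, dropped = triage(flags)
print([d.photo_id for d in queue])   # → ['a1', 'a3']
```

The key property is that the model's errors are asymmetric by design: a low threshold mainly costs reviewer time (more false flags to reject), while the human stage is what keeps false positives from reaching drivers.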
I’m definitely a fan of better enforcement of traffic rules to improve safety, but using ML* systems here is fraught with issues. ML systems tend to learn the human biases that were present in their training data and continue to perpetuate them. I wouldn’t be shocked if these traffic systems, for example, disproportionately impact some racial groups. And if the ML system identifies those groups more frequently, even if the human review were unbiased (unlikely), the outcome would still be biased.
It’s important to see good data showing these systems are fair, before they are used in the wild. I wouldn’t support a system doing this until I was confident it was unbiased.
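One concrete way to produce that kind of data is a per-group audit: compare how often the system flags people who did *not* actually offend, broken down by group, on ground-truth-labeled photos. A minimal sketch, with entirely hypothetical group names and numbers (not data from any real deployment):

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: (group, flagged_by_system, actually_offended) tuples.
    FPR per group = flagged-but-innocent / all innocent in that group."""
    flagged = defaultdict(int)
    innocent = defaultdict(int)
    for group, was_flagged, offended in records:
        if not offended:
            innocent[group] += 1
            if was_flagged:
                flagged[group] += 1
    return {g: flagged[g] / innocent[g] for g in innocent if innocent[g]}

# Hypothetical audit data: both groups equally innocent, unequally flagged
audit = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
print(false_positive_rate_by_group(audit))  # → {'group_a': 0.5, 'group_b': 0.0}
```

A large gap between groups, as in this toy example, is exactly the kind of disparity that would persist even through an unbiased human review, since the reviewer only ever sees what the model chose to flag.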
* it's all machine learning, NOT artificial intelligence. No intelligence involved, just mathematical parameters "learned" by an algorithm and applied to new data.
I think the lack of transparency about the data, both the data used for training and the actual statistics of the model itself, is pretty worrying.
There needs to be regulations around that, because you can’t expect companies to automatically be transparent and forthcoming if they have something to gain by not being so.
This is a really important concern, thanks for bringing it up. I’d really like to know more about what they are doing in this case to try and combat that. Law enforcement in particular feels like an application where managing bias is extremely important.
I would imagine the risk of bias here is much lower than, for example, the predictive policing systems that are already in use in US police departments. Or the bias involved in ML models for making credit decisions. 🙃
Machine learning is a type of artificial intelligence. As is a pathfinding algorithm in a game.
Neural networks were some of the original AI systems dating back decades. Machine learning is a relatively new term for it.
AI is an umbrella term for anything that mimics intelligence.
There’s nothing intelligent about it. It’s no smarter than a chatbot or a phone’s autocorrect. It’s a buzzword applied to it by tech bros that want to make a bunch of money off it
Indeed. That’s why it’s called artificial intelligence.
Anything that attempts to mimic intelligence is AI.
The field was established in the 50s.
Your definition of it is wrong I’m afraid.
The only people that call it that are people who don't get what AI actually is, or don't want to know because they think it's the future. There is exactly nothing intelligent about it. Stop spreading tech bro bullshit; call it machine learning, because that's what it actually is. Or are you really drinking the kool-aid hard enough that this is your hill to die on? It's not even as intelligent as a parrot that's learned to recognize colors and materials; it's literally just a souped-up cleverbot.
Literally the definition, my friend. You just don't know what the term is referring to.
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of human beings or animals. AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), and competing at the highest level in strategic games (such as chess and Go).[1]
Artificial intelligence was founded as an academic discipline in 1956.[2] The field went through multiple cycles of optimism[3][4] followed by disappointment and loss of funding,[5][6] but after 2012, when deep learning surpassed all previous AI techniques,[7] there was a vast increase in funding and interest.
The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence (the ability to solve an arbitrary problem) is among the field’s long-term goals.[8] To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience and many other fields.[9]
They will, though. In the future. And because it's a (black-and-white) camera, there will be many false positives. And this is what the normal driver should fear: that the police just say "yeah, everything's fineeeee" and let the AI loose; this is just the first step toward that. I really doubt it would RELIABLY detect seat belt offenses.
See, this is why I only drive when I am drinking a 20oz coffee and eating a footlong sub. That way, when the AI accuses me of being distracted by my phone, the human it gets sent to for review will be like "oh no, he was simply balancing a sandwich on his lap while he took the lid off to blow on his coffee so it wasn't too hot; the AI must have thought the lid was a phone."
Besides, it also ensures I use a hands-free device for my phone because, face it… I don't have any free hands; I'm busy trying to find where that marinara sauce fell on my shirt when I was eating the last bite of meatball sub. (Add pepperoni and buffalo sauce.) Have to stay legal after all.
We have a couple of these cameras in The Netherlands.
We found it quite intrusive to look into people's cars. Therefore the computer flags photos of possible offenses, and a person verifies them.
Unfortunately the movable camera has a huge lens, and its location is reported to a Waze-like app before they are even finished setting it up.