Obligatory preface, written after the comment:
I am in no way a statistician or data-analysis guru. I admit I could be looking at this shit entirely wrong and welcome anybody who corrects anything I'm looking at incorrectly.
Actual comment:
The entire report itself is skewed as fuck before Rolling Stone cherry-picked the fuck out of it for the article to slam Tesla. Listen, I'm as sick of Elon as the next person, but these shit-on-everything-Elon hiveminds are so much more fucking obnoxious. They're always 10 to 1, comments by people who didn't read the article to comments by people who did.
At the end is the actual image from the site that issued the report. I didn't bother with a source link because it's right in the article OP posted.
Issues with the article and report:
-Point 1:
The figures are not for every car on the road; they only cover cars made between 2018 and 2022. Not a big deal, but still deceiving as fuck to theme the article as "Tesla has one of the highest death rates," because they left that time frame out of the RS article. Kind of like how they left out the fact that only one Tesla is in the top 6 and the other Tesla is second to last, with a flood of much larger, much more common vehicle names filling in between 1 and 23.
-Point 2:
Each rate is calculated off 1 billion miles driven per year. When you put any Tesla model up next to any Ford, Honda, GM, Toyota, etc., the share of all Teslas on the road it takes to reach 1 billion miles is going to be ridiculously higher than the share it takes from those much larger industry makes and models. In case I didn't explain that well, here is a made-up scenario to illustrate it (see the first sketch after this list). Let's say there are 1,000 Teslas on the road compared to 1,000,000 Priuses. The Tesla death rates are based on 1,000 Teslas driving 1,000,000 miles each, whereas the Prius death rates are based on 1,000,000 Priuses driving 1,000 miles each.
-Point 3:
Remember point 2, as it plays into point 3. The method they used to calculate the rate outlined in point 2 is, I believe, normal when the government is figuring out vehicle death rates by category, location, driver age, etc. However, the study they reference is specifically about death rates per vehicle make, which makes the method used for calculating death rate by make and model completely fucked. They should've used the same number of cars per make and model, as well as the same miles driven, to get a comparable outcome of death rates per make and model over the 4-year span (see the second sketch below).
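To make the arithmetic in point 2 concrete, here's a minimal sketch of the made-up scenario. The fleet sizes and per-car mileages are the fake numbers from above; the death counts are invented purely so the math has something to chew on, and none of it comes from the actual report:

```python
# Minimal sketch of the made-up fleets from point 2. The death counts are invented;
# only the fleet sizes and per-car mileages come from the fake scenario above.

def deaths_per_billion_miles(deaths: int, vehicles: int, miles_per_vehicle: float) -> float:
    """Fatality rate normalized to 1 billion vehicle miles driven."""
    total_miles = vehicles * miles_per_vehicle
    return deaths / total_miles * 1_000_000_000

# 1,000 Teslas driving 1,000,000 miles each (1 billion total miles)
print(deaths_per_billion_miles(deaths=5, vehicles=1_000, miles_per_vehicle=1_000_000))

# 1,000,000 Priuses driving 1,000 miles each (also 1 billion total miles)
print(deaths_per_billion_miles(deaths=5, vehicles=1_000_000, miles_per_vehicle=1_000))
```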
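And here's a rough sketch of what the comparison point 3 is asking for might look like: every make and model evaluated at the same fleet size, the same miles per car, and the same 4-year span. The standardized fleet size and yearly mileage below are arbitrary placeholders I made up, not figures from the report:

```python
# Rough sketch of an equal-exposure comparison: scale each make/model's observed
# deaths to a standardized fleet. All defaults here are arbitrary placeholders.

def deaths_at_standard_exposure(observed_deaths: int,
                                observed_miles: float,
                                std_fleet_size: int = 1_000,
                                std_miles_per_car_per_year: float = 15_000,
                                years: int = 4) -> float:
    """Expected deaths if this make/model had std_fleet_size cars, each driving
    std_miles_per_car_per_year, over the given number of years."""
    per_mile_rate = observed_deaths / observed_miles
    standard_miles = std_fleet_size * std_miles_per_car_per_year * years
    return per_mile_rate * standard_miles
```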
-Edit:
Adding this edit to the beginning to stop the replies from people who read the scenario for context and can't fight their compulsion to reply by nitpicking my completely made-up list of "unbiased" metrics. To these peeps I say, "Fucking no. Bad dog. No!" I don't fucking care about your commentary on a quickly made-up scenario. Whatever qualms you have, just fuckin' change the imaginary scenario so it fits the purpose the scenario is serving.
-Preface of actual comment:
Completely made-up scenario to give context to my question. This is not me defending anything referenced in the article.
-Actual scenario with read, write, edit permissions to all users:
What if the court ordered the release of the AI code and training methods for this tenant-analysis AI bot and found the metrics used were simply credit score, salary, employer, and former rental references. No supplied data for race, name, background check, or anything else that would tip the bot toward or away from any biased results. So this pure-as-it-could-be bot still produces the same results as seen in the article. Again, an imaginary scenario that likely has no foundation in truth.
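To make the imaginary scenario a bit more concrete, here's a purely hypothetical sketch of a screening model whose only inputs are the four metrics above. Every feature, weight, and cutoff is invented; the only point is that race, name, and background-check data never appear as inputs:

```python
# Hypothetical tenant-screening sketch for the imaginary scenario. All weights and
# the approval cutoff are invented; the only inputs are the four metrics named above.

from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int          # e.g. 300-850
    salary: float              # annual, in dollars
    employer_verified: bool
    positive_references: int   # count of positive former-landlord references

def approve(a: Applicant) -> bool:
    score = (
        0.5 * (a.credit_score / 850)
        + 0.3 * min(a.salary / 100_000, 1.0)
        + 0.1 * (1.0 if a.employer_verified else 0.0)
        + 0.1 * min(a.positive_references / 3, 1.0)
    )
    return score >= 0.6  # invented cutoff

# Note: even with no protected attributes as inputs, outcomes can still end up
# correlated with them if these four features are themselves correlated with
# protected attributes.
```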
-My questions for the provided context:
Are there studies that compare methods of training LLMs, with results showing differences ranging from little or no racial bias to more racial bias?
Are there ways of training LLMs to perform without bias, or is the problem in the LLM's code, such that no matter how you train them there will always be some bias present?
In the exact imaginary scenario, would the pure, unbiased-angel version of the AI bot, producing results just as racist as those of AI bots trained with bias, see different court rulings than an AI whose flawed design demonstrably caused the biased results?
-I'm using "bias" over "racist" to reach a broader area beyond race-related issues. My driving purposes are:
To better understand how courts are handling AI-related cases and whether they give a fuck about the framework and design of the AI, or if none of that matters and the courts are just looking at the results;
Wondering if there are ways to make, or already-made, LLMs that aren't biased, and what about their design makes them biased: is it the doing of the makers of the LLM, or is it the training and implementation of the LLM by the end user/training party that is to blame?