I don’t code so correct me if I’m wrong, but wouldn’t the code have to be generally accepted, reviewed, and verified by other members of the project? AI can fuck right off as far as I’m concerned, but this isn’t a situation where a CEO just unilaterally decides vibe coding is the move. Unless I’m mistaken.
I’m no fan of AI generally, but “AI Vulnerable” as a term just doesn’t make much sense to me. Code reviewing should be filtering out bad code whether it originates from an AI or a human.
PR spamming using AI is another problem which is very serious and harmful for OSS, but that’s not due to some unique danger that only AI code has and human contributions don’t.
Code reviewing should be filtering out bad code whether it originates from an AI or a human.
But studies are showing it doesn’t work.
A human makes a mental model of the entire system, does some testing, and submits code that works, passes tests, and fits their understanding of what is needed.
A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.
And yes, plenty of human coders fall into the second bracket, as well.
But AI is very good at writing code that looks right. Code review is a good and necessary tool, but the data tells us code review isn’t solving the problem of bugs introduced by AI generated code.
I don’t have an answer, but “just use code review” probably isn’t it. In my opinion, “never use AI code assist” also isn’t the answer. There’s just more to learn about it, and we should proceed with drastically more caution.
A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.
That’s still on the human who opened the PR without making the slightest effort to test the AI’s changes, though.
I agree there should be a lot of caution overall, I just think the problem is a bit mischaracterized. The problem is the newfound ability to spam PRs that look legit but are actually crap, but the root here is humans doing this for GitHub rep or whatever, not AI inherently making codebases vulnerable. There need to be ways to detect users who repeatedly make zero-effort contributions like that and ban them.
Context?
cpython is the reference implementation of the Python interpreter. The person who took this screenshot has blocked the Claude user on GitHub, so whenever that user has contributed to a repo you’re viewing, you see this warning. The Claude user is an AI agent. AI code is garbage.
That was as to the point as humanly possible. Thank you.
What is “AI vulnerable”? What is the problem here? Claude isn’t reverse-Midas, it’s not like everything they touch turns to shit.
Studies continue to show that AI routinely generates unsafe code and even human code reviews often don’t catch major problems. AI generated code should not be trusted or accepted and projects that accept them should be treated as compromised.
Humans can barely write safe C code, so I definitely don’t trust AI to. I’m not even blanket against AI assistance in programming, but there are way too many hidden landmines in C for an LLM to be reliable with.
why is cpython on github? I thought they had their own forge like GNOME and KDE
They moved to GitHub a few years ago, mostly for the benefits of issue tracking, which previously was not integrated in the forge IIRC.
The only link listed on wikipedia is on github
CC is actually really good if you know what you’re doing. The only issue imo would be PR spamming
Maybe I don’t understand what this warning means, but I don’t see anything here: https://github.com/python/cpython/issues?q=is%3Apr+author%3Aclaude
Does this mean something else?
If you block Claude, or any user really, and then visit a repo they’ve contributed to you will see this message.
Maybe Claude didn’t open the PR but contributed commits.
I tried a `git log --grep=claude` but it doesn’t net much, basically just this PR (which in fairness does look vibecoded). Maybe there’s some development branch in the repository that has a commit authored by Claude, but if so it’s not on main.
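A more thorough sweep than `--grep` alone might look like this (assuming the blocked account commits under an author name containing “claude”, and that co-authored commits carry the usual `Co-authored-by:` trailer; both are assumptions on my part):

```shell
# --grep only matches commit messages reachable from the current HEAD;
# --author matches the author field instead, and --all searches every ref,
# so commits hiding on development branches also show up.
git log --all --author=claude --oneline

# AI tooling often credits itself via a trailer in the message body
# rather than the author field, so grep for that too, case-insensitively:
git log --all -i --grep='co-authored-by: claude' --oneline
```

Nothing authoritative, just a way to check that a blocked account’s commits aren’t sitting on a non-default branch.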
I mean it’s Python. This is what we get for having been overly reliant on it.
All kidding aside, I am more than a bit confused by this.
What should be done?
Maybe it’s safe if the maintainer reviews PRs before merging
The problem is they get overwhelmed with these PRs. Godot has been talking about not being able to manage the workload lately, people just task AIs to vibecode fixes to perceived bugs and half of them don’t even do what they were prompted to do.
You can block those users but they just make new accounts
It honestly feels like a DDoS on do it yourself computing, by corporations who want total control over our thoughts.
I can’t wait for the money to dry up. It’s insane to me just how stupid people have been, trusting LLMs with anything whatsoever. These things cost so much money to run and they seem to fucking hypnotize investors into burning their money. Sooner or later the fact that they’re not making money has to catch up with them, right?
Thanks for the explanation! The user in the image is Claude itself, not a random anonymous user. I see the problem of the DDoS with issues, tickets etc. That is a real problem! But I don’t get the rigid denial of generative AI. As long as I review the code it generates, it can save me lots of time. I would hate the actions you described as well, but the image depicts nothing fishy. Am I wrong about this?
And maybe the janitor should sift through that river of diarrhea for the couple of pennies someone might have swallowed.
sadge
How hard can it be to have an AI take PRs from other AIs and clean out the worst, plus harden PR protocols? It could even assist/guide AI contributors via a special AI-contributor forum or whatever. AI is currently highlighting a lot of ‘holes’ in systems where we expect a certain behavior. Just coping/complaining and closing things off is a bad decision; we should accept these flaws in our systems and adapt them to a new world. The sooner the better.
The projects that get it right will then have an army of managed AI contributors, and a filtered/educational AI PR pipeline where project maintainers cherry-pick the crème de la crème…