“If you want to help, more funding so we can pay more maintainers to deal with the slop (on top of everything we do already) is the only viable solution I can think of,” wrote Verschelde.
What about moving the hosting to a self-hosted Gitea behind Anubis or something? Would that work?
Edit: we should all still be donating if we use the software. Godot is great.
Or how about only allowing human-verified accounts to open PRs? And making the submission of AI slop from a human-verified account a permaban?
Is that possible on GitHub? Wouldn’t this rely on the bots identifying themselves as such? The human slop submissions part makes sense; I think a little harshness is required for the time being, and maybe the human bans can be lifted later.
You can absolutely control who is allowed to make PRs on your repos. And it’d be easy to set up a process to confirm contributors are actually human.
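Roughly the kind of gate I mean, as a CI script sketch. Everything here is made up for illustration: the `VERIFIED_HUMANS` allowlist file and the `PR_AUTHOR` environment variable (which you’d populate from your CI system’s PR event payload) are assumptions, not any real project’s setup.

```python
"""CI gate sketch: fail the check unless the PR author is on a
verified-human allowlist. File name and env var are hypothetical."""
import os
import sys

def is_verified_human(author: str, allowlist_path: str = "VERIFIED_HUMANS") -> bool:
    """Return True if `author` appears in the allowlist (one username per line)."""
    try:
        with open(allowlist_path) as f:
            allowed = {line.strip().lower() for line in f if line.strip()}
    except FileNotFoundError:
        return False  # no allowlist means nobody passes
    return author.lower() in allowed

# Only run the gate when the CI system actually provides the author name.
if __name__ == "__main__" and "PR_AUTHOR" in os.environ:
    author = os.environ["PR_AUTHOR"]
    if not is_verified_human(author):
        print(f"PR author '{author}' is not on the verified-human list; blocking.")
        sys.exit(1)
    print(f"PR author '{author}' verified.")
```

The point is just that the mechanism is a few lines of glue once you have a verification process producing the allowlist; the hard part is the process itself, not the check.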
My question is: if this is easy and possible, why haven’t they done it? Seems like a massive oversight. Maybe hit them up.
They probably weren’t inundated that badly until recently. There’s no point in automating a low-effort, low-frequency process. It’s just that the frequency changed, and the noise factor exploded.
Insufficient. I know actual humans who use AI to write code.
What I mean is that you can change the code of conduct to say “vibe-coded submissions will get you a permaban.”
How do you prove that something is vibe-coded?
Smaller changesets are not difficult to check directly.
Massive, sweeping changes should generally not be proposed without significant discussion, and should come with thorough explanations. Thorough explanations and similar human commentary are not hard to screen for LLM-generated likelihood. Build that into the CI pipeline, and flag PRs whose LLM-likeness score passes some threshold as requiring further review and/or moderation enforcement.
What of programmers who edit the LLM-generated code to disguise it as human, i.e., the coding equivalent of tracing an AI image? LLM checkers may have difficulty detecting that.
Still a good start. Better than doing nothing.
We should tax corporations and use that to fund FOSS. It’s ridiculous how much of modern tech is built on the work of FOSS maintainers without the corporations paying back to it.