You can absolutely control who is allowed to make PRs on your repos. And it’d be easy to set up a process to confirm that contributors are actually human.
My question is: if this is easy and possible, why haven’t they done it? It seems like a massive oversight. Maybe hit them up.
They probably weren’t inundated that badly until recently. There’s no point in automating a low-effort, low-frequency process. It’s just that the frequency changed and the noise factor exploded.
Insufficient. I know actual humans who use AI to write code.
What I mean is that you can change the code of conduct to say “vibe-coded submissions will get you a permaban.”
How do you prove that something is vibe-coded?
Smaller changesets are not difficult to check directly.
Massive, sweeping changes should generally not be proposed without significant discussion, and they should come with thorough explanations. Thorough explanations and similar human commentary are not hard to check for LLM-generation likelihood. Build that check into the CI pipeline, and flag any PR whose LLM-likelihood score passes some threshold as requiring further review and/or moderation enforcement.
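A minimal sketch of what that CI check might look like, in Python. This assumes the PR description has already been dumped to a file by the CI job, and `llm_likelihood` is just a placeholder for whichever detector a project chooses; none of the names here are real APIs.

```python
# Hypothetical CI gate: score the PR description with an LLM-likelihood
# detector and fail the check past a threshold. llm_likelihood is a
# stand-in, not a real library call.

import sys

THRESHOLD = 0.85  # tune per project; flagged PRs go to a human moderator


def llm_likelihood(text: str) -> float:
    """Stand-in detector: returns an estimated probability in [0, 1]
    that `text` was machine-generated. Replace with a real classifier
    (local model, hosted API, etc.); this stub always returns 0.0."""
    return 0.0


def main() -> int:
    # Assumes the CI job has already written the PR description to a file
    # (most CI systems expose it via an environment variable or API).
    with open("pr_description.txt", encoding="utf-8") as f:
        description = f.read()

    score = llm_likelihood(description)
    print(f"LLM-likelihood of PR description: {score:.0%}")

    if score >= THRESHOLD:
        # A non-zero exit marks the check as failed; this could be wired
        # to a "needs-human-review" label rather than a hard merge block.
        print("Flagging PR for further review / moderation.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The point of the threshold plus label (rather than an outright block) is that detectors are noisy: a flag just routes the PR to a human moderator instead of rejecting it automatically.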
What about programmers who edit LLM-generated code to disguise it as human-written? The coding version of tracing an AI image, basically. LLM checkers may have difficulty detecting that.
I mean, we’re basically talking about blocking lazily or incompetently executed agentic edits. If a skilled dev uses an LLM as a reasonable baseline, takes the time to go through the delta to confirm and correct things, and then produces good commentary and discussion (as opposed to pointing an LLM at the PR with their creds and telling it to respond to comments), then I don’t think that’s a huge problem. That is, in fact, a reasonably responsible way to use LLMs for coding.
The intent here is to limit the prevalence of LLM code spam, not to eliminate LLM usage altogether, which isn’t really achievable (for instance, many people have their IDE’s intellisense connected to an LLM to get more interesting suggestions - that’d be effectively impossible to block).
Still a good start. Better than doing nothing.