• gravitas_deficiency@sh.itjust.works
    5 days ago

    You can absolutely control who is allowed to make PRs on your repos. And it’d be easy to set up a process to confirm contributors are actually human.
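
    For example, one way that confirmation process could look (a minimal sketch, assuming a GitHub repo, a token in the GH_TOKEN environment variable, the requests library, and a hand-maintained allowlist of contributors already vetted as human; the owner/repo names are placeholders):

    ```python
    # Sketch: auto-close open PRs whose author isn't on a known-human allowlist.
    # OWNER, REPO, and ALLOWLIST are placeholders; GH_TOKEN must be set.
    import os

    import requests

    OWNER, REPO = "example-org", "example-repo"   # placeholder repository
    ALLOWLIST = {"alice", "bob"}                  # contributors vetted as human
    API = f"https://api.github.com/repos/{OWNER}/{REPO}"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GH_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }


    def close_unvetted_prs() -> None:
        prs = requests.get(f"{API}/pulls", headers=HEADERS,
                           params={"state": "open"}).json()
        for pr in prs:
            if pr["user"]["login"] in ALLOWLIST:
                continue
            number = pr["number"]
            # Leave a note explaining the policy, then close the PR.
            requests.post(f"{API}/issues/{number}/comments", headers=HEADERS,
                          json={"body": "PRs are limited to vetted contributors; "
                                        "please open an issue first to get added."})
            requests.patch(f"{API}/pulls/{number}", headers=HEADERS,
                           json={"state": "closed"})


    if __name__ == "__main__":
        close_unvetted_prs()
    ```

    Run on a schedule (cron or a CI job), something like this keeps the PR queue limited to people the maintainers have already said yes to.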

    • Sanctus@anarchist.nexus
      5 days ago

      My question is: if this is easy and possible, why haven’t they done it? It seems like a massive oversight. Maybe hit them up.

      • gravitas_deficiency@sh.itjust.works
        5 days ago

        They probably weren’t inundated that badly until recently. There’s no point in automating a low-effort, low-frequency process. It’s just that the frequency changed and the noise factor exploded.

          • gravitas_deficiency@sh.itjust.works
            5 days ago

            Smaller changesets are not difficult to check directly.

            Massive, sweeping changes should generally not be proposed without significant discussion, and should come with thorough explanations. Thorough explanations and similar human commentary are not hard to score for LLM-generation likelihood. Build that into the CI pipeline, and flag any PR whose LLM-likelihood score passes some threshold as requiring further review and/or moderator enforcement.
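
            As a rough sketch of that CI step (llm_likelihood is a hypothetical stand-in for whatever detector a project actually trusts; the repo names, label, and threshold are placeholders):

            ```python
            # Sketch of a CI gate: score a PR description for LLM-likelihood and
            # label it for extra human review when the score crosses a threshold.
            import os

            import requests

            OWNER, REPO = "example-org", "example-repo"   # placeholder repository
            THRESHOLD = 0.8                               # placeholder cutoff
            API = f"https://api.github.com/repos/{OWNER}/{REPO}"
            HEADERS = {
                "Authorization": f"Bearer {os.environ['GH_TOKEN']}",
                "Accept": "application/vnd.github+json",
            }


            def llm_likelihood(text: str) -> float:
                """Hypothetical detector: return a 0..1 score of how LLM-like the text reads."""
                raise NotImplementedError("plug in the detector of your choice here")


            def flag_pr_if_llm_like(pr_number: int) -> None:
                pr = requests.get(f"{API}/pulls/{pr_number}", headers=HEADERS).json()
                score = llm_likelihood(pr.get("body") or "")
                if score >= THRESHOLD:
                    # Labels are applied through the issues endpoint, even for PRs.
                    requests.post(f"{API}/issues/{pr_number}/labels", headers=HEADERS,
                                  json={"labels": ["needs-human-review"]})
            ```

            The label then just becomes the hook for whatever follow-up the maintainers want: a required human review, a moderation queue, or simply extra scrutiny before merging.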

            • I_Jedi@lemmy.today
              5 days ago

              What about programmers who edit LLM-generated code to disguise it as human-written? That’s the coding equivalent of tracing an AI image. LLM checkers may have difficulty detecting it.

              • gravitas_deficiency@sh.itjust.works
                5 days ago

                I mean, we’re basically talking about blocking lazily or incompetently executed agentic edits. If a skilled dev uses an LLM output as a reasonable baseline, takes the time to go through the delta to confirm and correct things, and furthermore produces good commentary and discussion (as opposed to pointing your LLM at the PR with your creds and telling it to respond to comments), then I don’t think that’s a huge problem. That is, in fact, a reasonably responsible way to use LLMs for coding.

                The intent here is to limit the prevalence of LLM code spam, not to eliminate all usage of LLMs whatsoever, which isn’t really achievable (for instance, many people have their IDE’s intellisense connected to an LLM to make it suggest more interesting things; that’d be effectively impossible to block).