• gravitas_deficiency@sh.itjust.works · 5 days ago

      Smaller changesets are not difficult to check directly.

      Massive, sweeping changes should generally not be proposed without significant discussion, and should come with thorough explanations. That kind of explanation and similar human commentary isn't hard to screen for LLM-generation likelihood. Build that into the CI pipeline, and flag PRs whose LLM-likeness score passes some threshold as requiring further review and/or moderation enforcement.
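
      To make that concrete, here's a rough sketch of the kind of CI step I mean. The llm_likelihood() scorer, the 0.8 threshold, and the stdin plumbing are all placeholders; a real setup would plug in an actual detector and pull the PR text from the forge's API.

      ```python
      # Sketch of a CI gate that flags PRs whose written commentary scores as
      # likely LLM-generated. llm_likelihood() and the threshold are placeholders,
      # not a real detector.
      import sys

      LLM_LIKENESS_THRESHOLD = 0.8  # assumed cutoff; tune per project


      def llm_likelihood(text: str) -> float:
          """Placeholder scorer returning a value in [0, 1].

          A real pipeline would call an actual detector (a local model or an
          external service); this stub just counts a few boilerplate phrases.
          """
          markers = ("as an ai", "in conclusion", "delve into", "furthermore,")
          hits = sum(marker in text.lower() for marker in markers)
          return min(1.0, hits / len(markers))


      def check_pr(description: str, comments: list[str]) -> int:
          texts = [t for t in [description, *comments] if t.strip()]
          score = max((llm_likelihood(t) for t in texts), default=0.0)
          if score >= LLM_LIKENESS_THRESHOLD:
              print(f"LLM-likeness {score:.0%} >= threshold; flag for review/moderation")
              return 1  # nonzero exit fails the job, marking the PR for follow-up
          print(f"LLM-likeness {score:.0%}; below threshold")
          return 0


      if __name__ == "__main__":
          # In CI the PR description/comments would come from the forge's API;
          # stdin is used here just to keep the sketch self-contained.
          sys.exit(check_pr(sys.stdin.read(), []))
      ```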

      • I_Jedi@lemmy.today · 5 days ago

        What about programmers who edit LLM-generated code to disguise it as human-written? It's the coding equivalent of tracing over an AI image. LLM checkers may have difficulty detecting that.

        • gravitas_deficiency@sh.itjust.works · 5 days ago

          I mean we’re basically talking about blocking lazily/incompetently executed agentic edits. If a skilled dev uses an LLM as a reasonable baseline, takes the time to go through the delta to confirm and correct things, and then produces good commentary and discussion (as opposed to pointing your LLM at the PR with your creds and telling it to respond to comments), I don’t think that’s a huge problem. That is, in fact, a reasonably responsible way to use LLMs for coding.

          The intent here is to limit the prevalence of LLM code spam, not to eliminate LLM usage entirely, which isn’t really achievable (for instance, many people have their IDE’s intellisense backed by an LLM so that it suggests more interesting completions - that’d be effectively impossible to block).