• zaphod@sopuli.xyz

    Writing code with an AI as an experienced software developer is like writing code by instructing a junior developer.

    • clif@lemmy.world

      Without the payoff of the next generation of developers learning.

      Management: “Treat it like a junior dev”

      … So where are we going to get senior devs if we’re not training juniors?

    • BradleyUffner@lemmy.world

      … That keeps making the same mistakes over and over again because it never actually learns from what you try to teach it.

        • BradleyUffner@lemmy.world

          Unless you are retraining the model locally at your 23-acre data center in your garage after every interaction, it’s still not learning anything. You are just dumping more data into its temporary context.
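          To make the distinction concrete, here’s a toy sketch (a hypothetical class, not any real LLM API): the “learned” behavior lives in weights that are fixed after training, while the context only steers one request and persists nothing afterwards.

```python
# Toy illustration of weights vs. context (hypothetical names, not a real API).

class ToyLLM:
    def __init__(self):
        # Set once at "training" time; never changed by prompting.
        self.weights = {"greeting": "Hello"}

    def respond(self, context):
        # The model reads the context for this request only; nothing persists.
        override = next(
            (m for m in context if m.startswith("use greeting:")), None
        )
        if override:
            return override.split(":", 1)[1].strip()
        return self.weights["greeting"]

llm = ToyLLM()
print(llm.respond(["use greeting: Hi"]))  # context steers this one reply: Hi
print(llm.respond([]))                    # next request: back to Hello
```

          Each call starts from the same weights; whatever you “taught” it in the previous context is gone.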

          • plantfanatic@sh.itjust.works

            What part of customize did you not understand?

            And lots of them fit on personal computers, dude. Do you even know what different LLMs there are…?

            One for programming doesn’t need all the fluff of books and art, so now it’s a manageable size. LLMs are customizable to any degree; use your own data library for the context data, even!

            • BradleyUffner@lemmy.world

              What part about how LLMs actually work do you not understand?

              “Customizing” is just dumping more data into its context. You can’t actually change the root behavior of an LLM without rebuilding its model.

              • SchmidtGenetics@lemmy.world

                If it’s constantly making an error, fix the context data, dude. What about an LLM/AI makes you think this isn’t possible…? Lmfao, you just want to bitch about AI, not comprehend how they work.

              • plantfanatic@sh.itjust.works

                “Customizing” is just dumping more data into its context.

                Yes, which would fix the incorrect coding issues. It’s not an LLM issue; it’s too much data. Or remove the context causing that issue. These require a little legwork and knowledge to make useful. Like anything else.

                You really don’t know how these work do you?

                • BradleyUffner@lemmy.world

                  You do understand that the model weights and the context are not the same thing, right? They operate completely differently and serve different purposes.

                  Trying to change the model’s behavior using instructions in the context is going to fail. That’s like trying to change how a word processor works by typing into the document. Sure, you can kind of get the formatting you want if you manhandle the data, but you haven’t changed how the application works.

                  • SchmidtGenetics@lemmy.world

                    Why are you so focused on just the training? The data is ALSO the issue.

                    If you ignore the one fix that works, of course all you can do is cry that it’s not fixable.

                    But it is.

      • VoterFrog@lemmy.world

        This is not really true.

        The way you teach an LLM, outside of training your own, is with rules files and MCP tools. Record your architectural constraints, favored dependencies, and style-guide information in your rules files, and the output you get will be vastly improved. Give the agent access to more information with MCP tools and it will make more informed decisions. Update them whenever you run into issues, and the vast majority of your repeated problems will be resolved.
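        For instance, a rules file might look something like this (the filename and exact format vary by tool; this is a generic illustrative sketch, not any specific tool’s syntax):

```markdown
# Project rules (illustrative example)

## Architecture
- All database access goes through the repository layer; never query from controllers.

## Dependencies
- Prefer the standard library; new third-party dependencies need sign-off.

## Style
- Follow the existing naming conventions; run the formatter before committing.
```

        The agent reads this at the start of every session, so fixes you record here stick across conversations instead of being retaught each time.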