It has to be pure ignorance.

I have only used my work's stupid LLM tool a few times (hey, I have to give it a chance and actually try it before I form opinions).

Holy shit, it's bad. Every single time I use it I waste hours. Even on simple tasks, it gets details wrong. I correct it constantly. Then I come back a couple of months later, open the same module to do the same task, and it gets it wrong again.

These aren’t even tools. They’re just shit. An idiot intern is better.

It's so infuriating that people think this trash is good. Get ready for a lot of buildings and bridges to collapse because of young engineers trusting a slop machine to be accurate on details. We will look back on this as the worst era in computing.

  • Hoimo@ani.social · 15 days ago

    They may not care about the implementation details of a Python library, but they do care about consistent execution and predictable results. And in some edge cases, they will care about the documentation saying exactly how those edge cases are handled.

    Writing Python is abstraction, yes, but it’s still programming. Once that Python code is written and tested and the dependencies are locked down, you can ship it and be certain it always works as designed.

    Spec-driven code generation is nothing like that. I can't ship the specs. I could generate the code in a pipeline and ship that, maybe. But there's no way I'm getting consistent builds from a code generator. So what do people do? They generate the code and put it in source control for review. When have you ever checked a compiled executable into source control, or reviewed one? There's machine code in there; shouldn't you verify that the compiler did what you asked of it?
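    A rough illustration of the "consistent builds" point above, as a byte-identical reproducibility check. The function name and the sample generator outputs are made up for illustration, not taken from any real tool:

    ```python
    import hashlib

    # A compiler pipeline can be checked for reproducibility by hashing its
    # output across runs; an LLM code generator generally cannot, because two
    # runs can produce different bytes even when the code is equivalent.
    def digest(text: str) -> str:
        return hashlib.sha256(text.encode()).hexdigest()

    # Illustrative outputs of two generator runs from the same spec:
    run_1 = "def f(x): return x + 1\n"
    run_2 = "def f(n): return n + 1\n"  # same behavior, different bytes

    print(digest(run_1) == digest(run_2))  # → False: the "build" is not consistent
    ```

    A deterministic compiler would pass this check run after run; that gap is why the generated code, not the spec, ends up in source control.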

    • Not_mikey@lemmy.dbzer0.com · 15 days ago

      Consistency depends on the code base, not the "compiler," in this sense. If the code base has consistent patterns and only one well-documented way to implement something, then the AI will follow those patterns; i.e., if there is only one way to run a job, the AI will use that method. There might be some variation in variable names, formatting, etc., but the core flow should be consistent between "runs."

      You can and should still test your code, both manually and with automation, to ensure it does what it says it does. Testing should be how you are certain it always works as designed. IMO, understanding your tests and test coverage is more important than understanding the implementation. This is why part of the spec for superpowers is a test plan, and that should be the most reviewed and iterated part.
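      One way to sketch the "tests as the contract" idea above, with pytest-style test functions. `run_job` here is a hypothetical generated function invented for the example, not anything from the thread:

      ```python
      # Hypothetical generated code: the implementation may vary between
      # generator runs, but the tests below pin its observable behavior.
      def run_job(items):
          """Sum the valid entries in a list, skipping None."""
          return sum(x for x in items if x is not None)

      # The test plan is the reviewed artifact: any regeneration of run_job
      # must still pass these, including the edge cases.
      def test_sums_valid_entries():
          assert run_job([1, 2, 3]) == 6

      def test_skips_none_entries():
          assert run_job([1, None, 2]) == 3

      def test_empty_input():
          assert run_job([]) == 0
      ```

      Under this view, reviewers iterate on the assertions rather than the generated body: as long as the tests cover the edge cases that matter, the exact implementation is interchangeable.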