• 1 Post
  • 11 Comments
Joined 2 years ago
Cake day: December 9th, 2023

  • As someone who regularly has to deal with code that has been needlessly broken into smaller functions, forcing me to constantly jump around to figure out what is going on, this really resonates with me.

    The latest case was someone who took something that really only needed to be a single function and instead turned it into a class with a dozen tiny methods.
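    A minimal sketch of the kind of thing the comment describes (all names here are invented for illustration, not taken from the actual code): the same small computation written as a class of tiny methods, and as one function that reads top to bottom.

```python
class ReportBuilder:
    """Over-factored version: each step hides behind its own tiny method,
    so following the logic means hopping between six one-liners."""

    def __init__(self, values):
        self.values = values

    def _filtered(self):
        return [v for v in self.values if v >= 0]

    def _total(self):
        return sum(self._filtered())

    def _count(self):
        return len(self._filtered())

    def _average(self):
        count = self._count()
        return self._total() / count if count else 0.0

    def build(self):
        return {"total": self._total(), "average": self._average()}


def build_report(values):
    """Single-function version: the entire flow is visible in one place."""
    filtered = [v for v in values if v >= 0]
    total = sum(filtered)
    average = total / len(filtered) if filtered else 0.0
    return {"total": total, "average": average}
```

    Both produce identical results; the difference is purely how much jumping around it takes to verify that.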


  • I was responding to the following paragraph in the article:

    We used to get proof-of-thought for free because producing a patch took real effort. Now that writing code is cheap, verification becomes the real proof-of-work. I mean proof of work in the original sense: effort that leaves a trail: careful reviews, assumption checks, simulations, stress tests, design notes, postmortems. That trail is hard to fake. [emphasis mine] In a world where AI says anything with confidence and the tone never changes, skepticism becomes the scarce resource.

    I am a bit wary that the trail of verification will continue to be so “hard to fake”.


  • The researchers in the academic field of machine learning who came up with LLMs are certainly aware of their limitations and are exploring other possibilities. Unfortunately, what happened in industry is that people noticed one particular approach was good enough to look impressive, and then everyone jumped on that bandwagon.