• Tartas1995@discuss.tchncs.de
    link
    fedilink
    arrow-up
    2
    ·
    1 day ago

    Ai written code is not copyright-able. I wonder if that is connected to this.

    And given that ai generated content (at least used to) poisons generative AIs… and open source is used to trade AIs…

  • StarryPhoenix97@lemmy.world
    link
    fedilink
    arrow-up
    15
    arrow-down
    1
    ·
    2 days ago

    I don’t have a problem with AI assisting with open source projects. On its face, it could be helpful to clean up some basic coding problems so a person with skill can come in and update later or remove it if it’s truly awful code. But then I remember that there’s always an angle. On top of all the other issues with AI coding, what happens if Anthropic tries to pull some legal shenanigans and say that they wrote most of the code, so they own the project? What if they are writing in backdoors and vulnerabilities?

    Like I said, on its face it sounds okay, but any time a corporation tries to touch a public project, things go wonky.

    • faintwhenfree@lemmus.org
      link
      fedilink
      English
      arrow-up
      3
      ·
      2 days ago

      Bigger problem is AI writes so much code and adds so many features that are jank, if the commits are accepted the whole project risks being like a jank. I doubt anthropic can claim open source project is their work.

  • skisnow@lemmy.ca
    link
    fedilink
    English
    arrow-up
    80
    arrow-down
    5
    ·
    3 days ago

    It’s a great way to get free training for their next model, courtesy of unwitting OSS reviewers.

    Spam all the open source projects with slop, mark which ones get rejected and which ones get accepted, and bam there’s some new training data for Claude Villanelle, and the only time they’ve wasted is other people’s.

    • rumba@lemmy.zip
      link
      fedilink
      English
      arrow-up
      10
      ·
      3 days ago

      I’ve been pondering why all the FOSS PR slop for ages, this HAS to be it.

  • aliser@lemmy.world
    link
    fedilink
    arrow-up
    4
    ·
    2 days ago

    put some prompt hijacking stuff into your contributing guide so that the slop generators identify themselves, then just ban them. or even better, make some kind of publicly available list of those accounts or EVEN better, a browser extension. fuck ai

  • redsand@infosec.pub
    link
    fedilink
    arrow-up
    24
    ·
    3 days ago

    They’re fluffing their résumé before the bubble pops. Don’t hire these clowns, interview them and ask about their code.

    • StarryPhoenix97@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      2 days ago

      Oh, I didn’t even consider that. Like using open source code to train their program and refine it’s coding capabilities.

    • skisnow@lemmy.ca
      link
      fedilink
      English
      arrow-up
      34
      arrow-down
      5
      ·
      3 days ago

      For sure they know they shouldn’t be doing it, otherwise they wouldn’t be trying to hide it.

      • BJW@lemmus.org
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        10
        ·
        2 days ago

        They hide it because of prejudice. See this community for inexhaustible examples as to why hiding accreditation of the tools used to perform a task is necessary.

        • skisnow@lemmy.ca
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 day ago

          awww those poor oppressed AI bros, nobody understands them, we need to get them classified as a protected group ASAP, maybe run an AI Pride parade to encourage them to come out of hiding and admit to who they truly are and live free of prejudice

        • queermunist she/her@lemmy.ml
          link
          fedilink
          arrow-up
          3
          ·
          1 day ago

          They hide it because they’re saboteurs and are intentionally ruining open source projects to protect their own market share. If every open source project is ruined by slop, there will be no choice but to use closed source proprietary software. They’re the enemy.

          • BJW@lemmus.org
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            1 day ago

            That’s an interesting theory. I don’t think it’s right, particularly because the motives make zero sense, but it’s interesting nonetheless. In the same way that ‘lizard people are controlling humanity from subterranean bunkers’, ‘the Earth is flat’ and ‘birds aren’t real’ are all interesting theories.

            Maybe I’m just missing the facts… What closed source, proprietary software does Anthropic have for sale, that there is an open source alternative for?

            • Hawke@lemmy.world
              link
              fedilink
              arrow-up
              3
              ·
              1 day ago

              I don’t think it’s a coherent goal of torpedoing a direct competitor but the way the sloppers seem to do things is a lot of “just trust me bro, use AI for everything”. No actual thought given to anything, especially guidance in how to go about using it.

              In that regard, any decently-engineered software is competition to them. And therefore “break everything so LLMs are the only tool” seems to be the strategy they’ve chosen.

              The other less-malicious possibility is that they use those tools and want to submit patches, but of course they’re eating their own dog shit so no one wants it. So they try to hide it.

        • ZDL@lazysoci.al
          link
          fedilink
          arrow-up
          4
          arrow-down
          1
          ·
          2 days ago

          And see you for an example of the precise degree of arrogant shitfuckery that makes people hate LLMbecile slopmongers.

          If people don’t want your slop, you don’t fucking give them your slop you ignorant fucking cunt!!!

          Move on to where your shit is welcome.

          • BJW@lemmus.org
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            1 day ago

            That was quite the temper tantrum. Feel better, ya luddite?

            Move on, back to your cave without electricity, where the scary Internet and it’s technologies can’t hurt you. Your closed mind is welcomed and can even be celebrated there, alone, in the dark, by you.

    • BJW@lemmus.org
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      8
      ·
      2 days ago

      Don’t forget the slats and the slots! We’re really clever here, so we have to use lots of slang to show how much hate we have for technology, and I’m worried you didn’t use enough of our childish monikers. Use more, or else people might think you actually support technological advances.

        • BJW@lemmus.org
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          2 days ago

          It must be slang since it’s not literal. It’s like referring to every dog as a bitch. Yes, many dogs are bitches, but if you call every single dog a bitch then you’re using the term as slang, not by it’s definition.

          • Carrot@lemmy.today
            link
            fedilink
            arrow-up
            4
            ·
            2 days ago

            In what way is it not literal? I’ve had to work closely with AI in a software engineering capacity, and if you let the AI do anything unchecked for long it will spit out slop. Slop by Merriam-Webster definition, is “a product of little or no value” or “digital content of low quality that is produced usually in quantity by means of artificial intelligence” which sounds exactly like an unchecked claude bot randomly submitting PRs to open source projects would be producing.

            • BJW@lemmus.org
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              2 days ago

              Because it’s not 100%. Just like all insects are bugs, but not all bugs are insects. See prior example.

              I’ve modified your reply in hopes you can discern what I’m saying: “I’ve had to work closely with a lot of dogs in a veterinarian capacity, and if you let dogs come in unchecked for long, one of them will be a bitch. Bitch by Merriam-Webster definition, is “a female dog” which sounds exactly like what a dog with a vagina having sex with other dogs would be producing.”

              If you use a word that applies to a portion of something as a word to describe all of the things, then you’re using slang.

              • Carrot@lemmy.today
                link
                fedilink
                arrow-up
                2
                ·
                2 days ago

                Alright, so you’ve changed your argument. Slang does not mean “an exaggeration.” I’m not trying to be the word police, it’s just hard to understand your argument if you aren’t using common definitions of words. You are arguing that the other person exaggerated, not that they are using slang. And to some degree, I agree with that.

                I will also admit that my last response missed a key point. Open source projects have been struggling to keep up with garbage PRs that started once AI bots have had the ability to submit PRs themselves. Hell, it started before that, when people still had to submit their AI generated PRs. On the whole, PRs across projects have been more slop than before LLMs existed. In your dog example, it’d be as though a breeder specializing in breeding female dogs moved in and business was booming. Yes, not every dog coming into the vet is a bitch, but there are significantly more bitches coming in than before.

                Ultimately, as they are right now, the net result of AI PRs is negative, so anyone choosing to release them is actively increasing the level of slop, even if a few of their changes aren’t themselves causing an issue. I’m not saying the people submitting them are malicious, just that they are contributing to an overall worse open source scene.

                • BJW@lemmus.org
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  1
                  ·
                  edit-2
                  2 days ago

                  I’ll be honest, I’m tired of every single thing created with the help of AI being automatically denigrated as “slop” so I’m just pushing back on the term in general. I’ve seen too many indie games review bombed on Steam merely for being accused of having used AI, despite the creators assertion they did not. Everyone calling everything “slop” feels remarkably like MAGATs in the USA calling everything they don’t like “woke” and it’s an aggravating trend to me.

                  Having said that, bogus PR created by AI without any review by a person prior submission is indeed slop. You’d think there would be an easy way to circumvent this, though. Such as by having a minimum threshold of reputation for a submitter before a PR is allowed. Even if an AI generates something, some person is putting their reputation on the line by claiming credit for it, no?

        • BJW@lemmus.org
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          5
          ·
          2 days ago

          Nooooo, say it ain’t so! Without clever nicknames how will we make our superiority known to those who adapt to new technology instead of ridicule it? Those slobs need to know we oppose the slop they slurped on their slots at the slats! It’s the only way they’ll know we’re superior to them 😭

  • tristan@tarte.nuage-libre.fr
    link
    fedilink
    Français
    arrow-up
    56
    arrow-down
    7
    ·
    3 days ago

    PSA: Prompting an LLM at length about what not to do is the best way to prime it to do that very thing. You’re loading a lot of tokens in memory and expecting a single “not” to do all the heavy lifting.

    This is adjacent to ironic process theory.

    • ThisLucidLens@lemmy.world
      link
      fedilink
      arrow-up
      14
      arrow-down
      2
      ·
      3 days ago

      Is this necessarily true? I remember seeing an article a while back suggesting that prompting “do not hallucinate” is enough to meaningfully reduce the risk of hallucinations in the output.

      From my fairly superficial understanding of how LLMs work, “don’t do X” will plot a completely different vector for the “X” semantic dimension than prompting “do X”. This is different to telling a human, for example, to not think about elephants (congratulations, you’re now thinking about elephants. Aren’t they cute. Look at that little trunk and smiley mouth)

      • tristan@tarte.nuage-libre.fr
        link
        fedilink
        Français
        arrow-up
        6
        ·
        2 days ago

        Thank you for your reply. I realised I don’t have enough deep knowledge about LLMs apart from empirical experience from working with it to confidently answer your question. It would be interesting to find (or create if it doesn’t exist) more research on the subject.

    • Kogasa@programming.dev
      link
      fedilink
      arrow-up
      10
      arrow-down
      3
      ·
      3 days ago

      It’s possible that whatever prompt enhancement and processing happens around the LLM part of the application addresses this somewhat.

  • eestileib@lemmy.blahaj.zone
    link
    fedilink
    arrow-up
    35
    arrow-down
    9
    ·
    3 days ago

    One of my loved ones is defending this and I am having a moral crisis over my relationship with her because of that.

      • BJW@lemmus.org
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        2
        ·
        edit-2
        2 days ago

        Good advice, Shill Bot! But you should have specified they use Nvidia hardware to make it an effective shill. What if they use ATi? How will that help your owners turn a profit? Silly shill bot.

        • jj4211@lemmy.world
          link
          fedilink
          arrow-up
          9
          arrow-down
          5
          ·
          3 days ago

          Yeah, it is hard grasping why online commenters that are fans are fans, but in my real world interactions, I get a better feel for it.

          The people that are all in on the AI, slop and all, are the people I really found annoying to begin with. They tend to think everyone is desperate to hear what they say, that verbosity is king, and generally don’t really know what they are talking about. They are the sort that would spend a ton of time fretting over some ‘design document’ that when finally shared is absolutely nothing actionable, despite 10 pages with of gorp. Any specific outcome has nothing to do with the document, but they’ll take credit for “thought leadership” if it works, and blame the “inadequate team” if it fails. They are used to and cherish verbose yes men and are used to making vague statements and getting results they can’t judge already.

          Or on the other end, people who endlessly fell for clickbait. Slop before AI was really a factor in slop. People forwarding those chain letters back in the day.

          The people I have held long respect for tend to range between “too annoying to even deal with” to “it’s a little useful in key circumstances”. I have yet to personally meet someone I had long respected who went all in on AI.

          The insidious thing is I’m pretty sure they both outnumber and tend to have more power. Those folks who “thought lead” without actionable direction nor even a vague understanding of how the work happens? Those are the ones that got promoted, with the good ones largely overlooked for promotions, mainly because at a certain point promotion requires “professional networking” and making the executives happier with themselves than it is about good work. Now we are in a position where those people who never “got” the work are telling themselves that the LLMs can replace those annoying “nerds” that have leverage over them, and if there’s one thing they can’t stand, it’s having people they don’t understand having anything looking like leverage over them.

    • BJW@lemmus.org
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      6
      ·
      2 days ago

      Quick, call her a slob that slops on her slot at slats! Then she’ll know you’re a true member of the erudite luddites.

      • eestileib@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        4
        arrow-down
        3
        ·
        2 days ago

        I did tell her “I don’t enjoy having my ass kissed by a machine” and that had approximately the effect you’re looking for.

        • BJW@lemmus.org
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          2
          ·
          2 days ago

          Some are actually pretty good at it. Have you tried the Lovense models? They’ve really got the feeling of a tongue down.

  • TheDoctorDonna@piefed.ca
    link
    fedilink
    English
    arrow-up
    27
    arrow-down
    5
    ·
    3 days ago

    The company I work for keeps trying to push Claude on us, even is company “social” situations. I never bothered to sign up for an account back when we were prompted so I guess I miss out…oh no?

    No, wait - the opposite of oh no.