Sadly, it seems Lemmy is going to accept LLM-generated code going forward: https://github.com/LemmyNet/lemmy/issues/6385 If you comment on the issue, please try to make sure it’s a productive and thoughtful comment, not pure hate brigading.
Consider upvoting the issue to show community interest.
Edit: perhaps I should also mention a similar discussion here: https://github.com/sashiko-dev/sashiko/issues/31 This one concerns the Linux kernel. I hope you’ll forgive me the slight tangent, but this one could benefit from more eyes too.


We’re using the word the cartels pushing LLMs are using for their trash.
Don’t blame us for turning “AI” into a swearword, blame those bastards.
Once the bubble pops, anyone working on any kind of AI will be seen as a liability and shunned by investors, setting research back for decades (not that any form of research would have been possible for long in the slop-saturated world they’ll leave behind). All because those scammers thought selling their malware as “AI” would cause more marks to fall for it.
I completely understand, and I agree that the current state of the LLM bubble is dangerous. Many of the people involved know it and are just planning to make as much money as possible and get out before the bubble bursts. It’s capricious greed that will cause a lot of damage in the end, and they should be called out for it.
I just think some people deviate from reality in their quest to fight these things. I don’t like being party to misinformation and lies, even when they come from people I otherwise believe in.
It’s certainly not popular at times (I haven’t checked my posts, but I imagine I’ve accumulated some downvotes), but we’re all more effective in our advocacy if we don’t allow people to deliberately spread misinformation.
We don’t need it, we have facts and reality on our side… no need to create fiction.