• 0 Posts
  • 12 Comments
Joined 5 months ago
cake
Cake day: August 30th, 2025

help-circle

  • Good to hear it! I was afraid you’d just gone off the headline and not the contents, with the author being someone who works in the AI field and the article being pro-AI in my opinion. I apologize, you have obviously done your homework.

    I agree, it’s fucking crazy what this dude says. He’s like sure LLMs are flawed to the bone, but you just have to accept that and work around it. Just build a fence around it, and you can even build that fence using AI! I mean WTF…

    One of the reason I think the article is pro-AI is because of lines like this:

    This is not at all an indictment of AI. AI is extremely useful and you/your company should use it.


  • Have you actually read the article? It isn’t anti-AI it’s actually very much pro-AI. All it says is that there are a lot of companies duping people at other companies (that use AI) to sell their shit.

    He argues so called AI security companies sell solutions for problems that are inherent to the technology and thus can never be fixed. But by showing there is a problem and then offering the solution to that problem, people think they are actually fixing something. In reality it only fixes that one specific problem, but leaves open the almost infinite of other very similar issues.

    His argument is to actually handle AI security by getting someone that really knows what is what (how one would get that person or distinguish them from bullshitters is a mystery to me). Some issues are just a part of the deal with AI, so they have to be accepted and managed where possible. Other issues should be handled upstream or downstream and he argues AI could be implemented on those parts as well.

    I agree with his argument, it is total bullshit to show the flaws in LLM models and then claim to fix those with expensive software that doesn’t actually solve the core issue (because that is impossible). However in my experience this has always happened in the past more or less. I’m not sure it’s happening more now? Or because understanding of AI is so low usually, so it’s easier?



  • This reminds me of a thought experiment where a super human level AI who runs everything is itself actually run by humans. The humans just have a regular job, they wake up, do their human morning things and get to work. There they do stuff, maybe on a computer or maybe with paper or something, this doesn’t really matter. As long as the work is mysterious and important. Every day they do pretty much the exact same thing, just like most jobs. Then at the end of the day they go home and live a regular normal life. The idea is what these humans do is how the machine AI actually works or “thinks”. The humans don’t know what exactly they are doing, as each task is only a very small part of a greater whole. So they don’t control it or influence it exactly. The work is done by enough people to make the AI smart and fast enough to be useful. The AI needs the humans to work and in turn the AI runs everything for the humans, so they need each other.






  • Wtf is this headline? The money the NHS saves is the important part? Why is that even mentioned, sure it’s a useful side effect perhaps. But even if it costs more money, isn’t reducing heart attacks and strokes the important part?

    Also “Without the public having to change eating habits” is BS. If you reduce the salt, by definition you are changing the eating habits. And in my experience, food with less salt tastes like shit. In the EU the amount of salt on crisps has been reduced. Which is a good thing for health reasons, however all crisps tastes like cardboard now. My favorite snack have been ruined. I still buy the crisps to enjoy on a Saturday evening with some beer and a movie, but when I eat then I regret it instantly. I know it’s not healthy, neither the the alcohol nor the crisps, but can it at least taste good if it kills me?




  • I’m pretty sure it isn’t that easy to do what that dude did. It’s a multi step process. It doesn’t say: “This will delete your data, are you sure you want to continue?”, but it also isn’t like he clicked on the x top right and all of the data was gone. The language of the function is also pretty clear and there are a lot of ways to find out what it does. The dude even admits himself he wanted to know if he could toggle that and still have access to his data, but instead of asking the chatbot beforehand he just tried it and then cried foul when it actually locked him out.


  • I’m sorry but WHAT? how do you work on stuff for 2 years and have NO BACKUPS? Like dude WTF. I have backups of backups, I have version histories of everything I do. I have physical backups, cloud backups, off site backups, you name it. If I put effort into creating something, it’s worth putting in effort to keep it safe.

    This dude was outsourcing his brain to a dumb chat service and lost the ability to think. His brain was so fried he actually tried the feature that said this will lock you out of your data, just to see if he actually got locked out.

    Not the mention the dude worked on grants, papers and as a professor. And now the bullshit generator has tainted all of that. Imagine getting a huge student loan to go to get a good education and your professor just outsourced it to fucking ChatGPT. I would be hella mad. Not to mention things like grants and early research is often covered by an NDA and my man just uploaded all of that to some shitty US company.