Have you actually read the article? It isn’t anti-AI it’s actually very much pro-AI. All it says is that there are a lot of companies duping people at other companies (that use AI) to sell their shit.
He argues so called AI security companies sell solutions for problems that are inherent to the technology and thus can never be fixed. But by showing there is a problem and then offering the solution to that problem, people think they are actually fixing something. In reality it only fixes that one specific problem, but leaves open the almost infinite of other very similar issues.
His argument is to actually handle AI security by getting someone that really knows what is what (how one would get that person or distinguish them from bullshitters is a mystery to me). Some issues are just a part of the deal with AI, so they have to be accepted and managed where possible. Other issues should be handled upstream or downstream and he argues AI could be implemented on those parts as well.
I agree with his argument, it is total bullshit to show the flaws in LLM models and then claim to fix those with expensive software that doesn’t actually solve the core issue (because that is impossible). However in my experience this has always happened in the past more or less. I’m not sure it’s happening more now? Or because understanding of AI is so low usually, so it’s easier?
I did! I read all the articles I post. I don’t think I would characterize it as pro-AI either, but more as an even-handed take. He does reach a pro-AI synthesis, but I found the arguments that the existing AI security guardrails are ineffectual because AI cannot separate data from instruction, and that there is no way to deterministically test the entire solution space a pretty damning condemnation of the whole project, while reserving space for a future where we find a way to use LLMs effectively. And I think that’s ultimately where I fall — the most “fuck ai” thing that I feel is that there is a way we could implement the use of LLMs that respects intellectual property rights and is safe and secure and doesn’t burn the world, but that we’re not doing any of all that and burning the world instead. I mostly posted it here because I’m more interested in the skeptic’s take on the article than the glazer’s.
Good to hear it! I was afraid you’d just gone off the headline and not the contents, with the author being someone who works in the AI field and the article being pro-AI in my opinion. I apologize, you have obviously done your homework.
I agree, it’s fucking crazy what this dude says. He’s like sure LLMs are flawed to the bone, but you just have to accept that and work around it. Just build a fence around it, and you can even build that fence using AI! I mean WTF…
One of the reason I think the article is pro-AI is because of lines like this:
This is not at all an indictment of AI. AI is extremely useful and you/your company should use it.
The one thing that I might point out is that if you are a security researcher who is deeply skeptical of AI, and you want to have an impact on the current state of AI security practices, you are effectively required to include disclaimers like this. The pro-AI crowd definitely shut down immediately when you say, “I don’t think this is production-ready” and just shove it into prod regardless, so if you wanna say “well, could you at least be clear-eyed about your security posture,” you need to keep them reading.
Yeah that’s true. But this dude is known in the space as a AI lover tho. He runs an AI course website called “Learn Prompting”
Although i agree that the article isn’t really “anti-IA” it does show that products in this ecosystem are based on … what word should i use? deception?
If you wanted to be that kind of jerk, you know what’s easier than putting shit on the stop sign?
Cutting it down.
I mean, I get the worry and concern about AI, but some people are really grasping at straws here…


