I did! I read all the articles I post. I don’t think I would characterize it as pro-AI either, but more as an even-handed take. He does reach a pro-AI synthesis, but I found the arguments that the existing AI security guardrails are ineffectual (because AI cannot separate data from instructions) and that there is no way to deterministically test the entire solution space to be a pretty damning condemnation of the whole project, even while he reserves space for a future where we find a way to use LLMs effectively. And I think that’s ultimately where I fall. The most “fuck ai” thing I feel is that there is a way we could implement LLMs that respects intellectual property rights, is safe and secure, and doesn’t burn the world, but we’re not doing any of that and are burning the world instead. I mostly posted it here because I’m more interested in the skeptic’s take on the article than the glazer’s.
Good to hear it! I was afraid you’d just gone off the headline and not the contents, with the author being someone who works in the AI field and the article being pro-AI in my opinion. I apologize, you have obviously done your homework.
I agree, it’s fucking crazy what this dude says. He’s like, sure, LLMs are flawed to the bone, but you just have to accept that and work around it. Just build a fence around it, and you can even build that fence using AI! I mean WTF…
One of the reasons I think the article is pro-AI is because of lines like this:
“This is not at all an indictment of AI. AI is extremely useful and you/your company should use it.”
The one thing that I might point out is that if you are a security researcher who is deeply skeptical of AI, and you want to have an impact on the current state of AI security practices, you are effectively required to include disclaimers like this. The pro-AI crowd definitely shuts down immediately when you say, “I don’t think this is production-ready,” and just shoves it into prod regardless, so if you wanna say “well, could you at least be clear-eyed about your security posture,” you need to keep them reading.
Yeah, that’s true. But this dude is known in the space as an AI lover, tho. He runs an AI course website called “Learn Prompting.”