

How is an AI agent any different from any other software just because it does inference with an LLM? If I order something from their website and I get overcharged due to a bug, are they also not responsible? It’s not like agents can’t be tested, or like guardrails can’t be put into place.
I know as a software engineer, I’m responsible for the code in any PR that has my name on it, regardless of what tools I may have used to generate the code, including AI. Are their dev teams not responsible for making sure their shit works?




Sure, but AI engineers are well aware of that fact (or should be), and there are ways to limit the potential damage, like a human in the loop, especially for purchases over a certain threshold. Overall, a system like this should never really be trusted to make purchases without the customer approving each purchase.
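A minimal sketch of what that kind of threshold guardrail could look like, assuming a hypothetical agent that proposes purchases; all names here (`APPROVAL_THRESHOLD`, `execute_purchase`) are made up for illustration, not any real API:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical cutoff: anything above this is held for a human decision.
APPROVAL_THRESHOLD = 50.00

@dataclass
class Purchase:
    item: str
    price: float

def execute_purchase(purchase: Purchase,
                     approve: Callable[[Purchase], bool]) -> str:
    """Place a purchase, but gate large ones behind human approval."""
    if purchase.price > APPROVAL_THRESHOLD:
        # In a real system, approve() would prompt the customer
        # (push notification, email, etc.); here it's just a callback.
        if not approve(purchase):
            return "held"
    return "placed"

# Small purchase goes through; large one is held unless approved.
print(execute_purchase(Purchase("socks", 12.99), lambda p: False))
print(execute_purchase(Purchase("tv", 499.00), lambda p: False))
```

The point of the hard `if` check is that it sits outside the model entirely: no matter what the agent decides to buy, the expensive path cannot execute without an explicit human yes.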
Then again, if you’re going to approve every purchase, I’m not sure how it really saves time. And if it is purchasing without approval, the first time it buys something you didn’t want, the hassle of battling Target for a refund will negate any time savings. Largely seems like AI for the sake of AI.