• dual_sport_dork 🐧🗡️@lemmy.world
    2 days ago

    Just yesterday I caught a client emailing me an entire RFQ that he generated with a fucking LLM. I know this because he didn’t read it, and was too stupid to strip off the inevitable “sure, I can help you with that” at the top.

    …His dumbass chatbot asked for not one but two sizes of product that do not exist, while blithely assuring the reader that they do. It even came complete with a purported citation. I didn’t get the link because he pasted everything as plain text, but I could at least see that the citation named the website of one of our competitors, which at this point was probably itself a “source” generated by an LLM.

    It also suggested he ask about a specific product that just so happened not to meet his actual original criteria, which were specified on the line directly preceding it.

    If I’d let my chatbot respond to this moron’s chatbot and he’d gone through with the sale, he surely would have lost a few thousand dollars. Luckily for him, I don’t use LLMs or generative AI in any capacity and I actually read my emails, so I caught this instantly and told him (in corpo-speak, more or less) not to be an idiot.