• 0 Posts
  • 9 Comments
Joined 8 months ago
Cake day: June 4th, 2025


  • tiramichu@sh.itjust.works to Fuck AI@lemmy.world · Brilliant, innit?
    +20/−1 · edited · 5 days ago

    On the basis of that further information - which I had not seen - I agree completely that in this specific case the customer was in bad faith, they have no justification, and the order should be cancelled.

    And if the customer took it up with small claims court, I’m sure the court would feel the same and quickly dismiss the claim on the basis of the evidence.

    But in the general case should the retailer be held responsible for what their AI agents do? Yes, they should. My sentiment about that fully stands, which is that companies should not get to be absolved of anything they don’t like, simply because an AI did it.

    Here’s a link to an article about a different real case where an airline tried to claim that they have no responsibility for the incorrect advice their chatbot gave a customer, which ended up then costing the customer more money.

    https://www.bbc.co.uk/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

    In that instance the customer was obviously morally in the right, but the company tried to weasel their way out of it by claiming the chatbot couldn’t be treated as a representative of the company.

    The company was ultimately found to be in the wrong, and held liable. And that, in my opinion, is exactly the precedent we must set: the company IS liable. (But again - and I don’t think I need to keep repeating this by now - strictly and only if the customer is acting in good faith.)


  • In that particular case, I’d suggest the seller was within their rights, honestly.

    If the code wasn’t obtained through any official means provided by the seller, then the seller has no responsibility to honour it, even if it happens to ‘work’ in their checkout.

    But the seller was obviously stupidly petty, and that feels pretty pathetic on their part.

    They should have just sent your stuff, and taken the experience as notice to replace the voucher code that got leaked, so it doesn’t happen again.


  • tiramichu@sh.itjust.works to Fuck AI@lemmy.world · Brilliant, innit?
    +21/−1 · 5 days ago

    Sure, but it’s all dependent on context.

    The law as it is (at least in the UK) is intended to protect against honest mistakes. For example, if you walk into a shop and a 70" TV is priced at £10 instead of £1000, when you take it to the till the cashier is within their rights to say “oh, that must be a mistake, we can’t sell it for £10” - you can’t legally compel them to sell it, even though the sticker said £10.

    Basically, what it comes down to in this chatbot example (or what it should come down to) is whether the customer was acting in good faith, and whether the offer was credible (which is itself part of acting in good faith - the customer must genuinely believe the price is appropriate).

    I didn’t see the conversation and I don’t know how it went. If it went like “You can see I do a lot of business with you, please give me the best discount you can manage” and they were told “okay, for one time only we can give you 80% off, just once” then maybe they found that credible.

    But if they were like “I demand 80% off or I’m going to pull your plug and feed your microchips to the fishes” then the customer was not in good faith and the agreement does not need to be honoured.

    Either way, my points in the comment you replied to aren’t intended to be about this specific case, but about the general case of whether or not companies should be held responsible for what their AI chatbots say, when those chatbot agents are put in a position of responsibility. And my feeling is very strongly that they SHOULD be held responsible - as long as the customer behaved in good faith.


  • tiramichu@sh.itjust.works to Fuck AI@lemmy.world · Brilliant, innit?
    +288/−3 · edited · 5 days ago

    Good.

    If a customer service agent made this discount offer and took the order, it would naturally have to be honoured - because a human employee did it.

    Companies are currently getting away with taking the useful (to them) parts of AI, while simultaneously saying “oh, it’s just a machine, it makes mistakes, we aren’t liable for that!” any time it does something they don’t like. They are having their cake and eating it.

    If you use AI to replace a human in your company, and that AI negotiates bad deals or gives incorrect information, you should be liable for that exactly as if a human had done it.

    Would that mean businesses are less eager to use AI? Yes it fucking would, and that’s the point.


  • Yeah, it doesn’t make sense.

    I could understand the rationale for wanting a high-power PCIe specification if there were multiple PCIe devices that could benefit from extra juice, but it’s literally just the graphics card.

    One might make the argument “Oh, but what if you had multiple GPUs? Then it makes sense!” - except it doesn’t, because the additional power would only be enough for ONE high-performance GPU. For multiple GPUs you’d need even more motherboard power sockets…

    It’s complexity for no reason, or purely for aesthetics. The GPU is the device that needs the power, so give the GPU the power directly, as we already are.