The most common thing computers do is break, and being forthcoming and transparent about that reality while not making your platform sound like an incoherent pile of bricks teetering on a cliff above a playground is a delicate balancing act. AWS’s reliability is the stuff of legend, and on the rare occasion that it fails, they walk the messaging tightrope very well. So I was surprised to learn all you have to do to sweep away twenty years of excellence and make them sound like frothing insecure zealots is sprinkle a bit of “perhaps AWS is bad at AI” narrative on it. Then, they lose their minds.
It’s kind of sad; I’ve seen many top solution architects and engineers leave in the past month. Some of them have 10 years of tenure and built a reputation. I don’t think AWS and Microsoft will recover when the bubble bursts.
What was the point of not trying to unionize and stop it? Y’all are going to lose your jobs anyways when the company collapses.
They should do that, but given their union-busting practices… Personally I’ve never gone to AWS or Microsoft; way too toxic for me.
Yeah but if everyone is going to lose their jobs because the company is inevitably run into the ground by braindead executives who have no idea how to make the operation run smoothly or efficiently…?
Then what did everyone have to lose that they were so afraid of? They were already going to lose it. Not just some of it, all of it in a great terrible crash…
Want to know why they care more about AI’s reputation than their engineers? Firing engineers makes the stock go up, and bad news about their AI makes the stock go down.
Exactly
This isn’t a coverup; it’s a massive insecurity that’s extremely cringey to witness. AWS would rather have the world believe their engineers are incompetent than admit their artificial intelligence made a mistake. That’s not just a messaging choice. That’s a company so desperate not to look behind in the AI race that they’d torch their own employees’ reputations to protect their robot’s feelings. What does it say about AWS’s strategic position that defending the AI’s reputation takes priority over protecting their humans? When did “don’t hurt the algorithm’s feelings” become corporate policy?
[…]
Things break. Code has bugs. AI will make mistakes. This is the natural order of building complex systems, and anyone who’s been in this business longer than a funding cycle understands that. The problem isn’t that Kiro decided production was due for a surprise deletion. The problem is that when faced with their first major AI failure, AWS’s instinct wasn’t transparency or accountability. It was to protect the AI’s reputation at all costs.
If your cloud provider would rather look incompetent than admit its AI is fallible, sit with that for a second. Not because this particular outage was the end of the world. It wasn’t. It’s Cost Explorer, for God’s sake; I spend meaningful chunks of my life with that service, and it being down for a few hours just means I’ll do something else for a bit. But we are at the exact moment where every cloud vendor is asking you to hand agentic AI the keys to your production environment. When the first real test case showed up, AWS’s communications instinct was to protect the robot and throw the human under the bus.
The robot boobies in that pic… 🤦‍♀️
The tiny hand reaching several meters.
🫨
This is what peak performance looks like
LLMs cannot fail, they can only be failed.
I have seen this quote before, but I don’t fully get what it’s conveying.
It’s a way to gloss over or redirect flaws. Apparently it’s a heavily political construct, judging from the search results I get when trying to find where the phrase came from.
In the context of, e.g., an authoritarian country, the leader is infallible, so any problems the citizens experience must be because the people under the leader failed to properly execute the leader’s vision. It can’t be that the leader’s vision was just wrong.