I’m extremely confused. Why is he checking the time every hour, 14 times a day? I understand he’s trying to test AI out so he’s doing something trivial, but I feel like I’m having an aneurysm reading this. This is still not an optimal way to do reminders. Am I just really dumb, or is this nonsense?
It feels like an aneurysm because he didn’t write the post (at least not in its entirety). Notice how it’s structured in LLMese, with the three-point bullet lists and “headings”.
You accurately described what I was feeling too. Like, it was so atrocious to read because I wanted to figure out what he was talking about and understand it, even though I knew it didn’t make sense and he was an idiot. But I kept rereading it because I wanted to be wrong and realize what he really meant.
But no, he’s just a fucking idiot. He’s on twatter, so it makes sense…
He told it to remind him to get milk the next day. The artificial stupidity set up a cron job that kept checking whether it was “tomorrow” yet before reminding him. And he’s a moron for paying for such a completely wasteful system.
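For anyone who doesn’t see why that’s so dumb, here’s a rough sketch of the difference in plain Python. This is illustrative only; the function names and times are made up, not the agent’s actual code:

```python
import datetime as dt
import time

def remind(msg: str) -> None:
    # Stand-in for whatever notification channel the agent actually uses.
    print(f"REMINDER: {msg}")

# What the post describes: wake up every hour and ask "is it tomorrow yet?"
# Each wake-up presumably involves a paid LLM call, hence the wasted money.
def polling_reminder(msg: str) -> None:
    target = dt.date.today() + dt.timedelta(days=1)
    while dt.date.today() < target:   # a dozen-plus pointless checks before anything happens
        time.sleep(3600)
    remind(msg)

# What any ordinary reminder tool does: compute the delay once, then fire exactly once.
def one_shot_reminder(msg: str, when: dt.datetime) -> None:
    delay = (when - dt.datetime.now()).total_seconds()
    time.sleep(max(delay, 0.0))
    remind(msg)

if __name__ == "__main__":
    tomorrow_9am = dt.datetime.combine(
        dt.date.today() + dt.timedelta(days=1), dt.time(hour=9)
    )
    one_shot_reminder("get milk", tomorrow_9am)
```

Same end result, but the second version never has to ask “is it tomorrow yet?” at all.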
A fool and his money are soon parted
I wonder if it at least reminded him correctly. Or did it check whether it’s “tomorrow” already, see that it’s still “today”, and decide not to remind him?
The money probably ran out before then.
Oh, I understand that, but then if you look at the table, he says he improved it and it’s still checking 14 times a day.
If I took a drill to a leaky boat hull and claimed to have “improved it,” would it sink any slower?
Fair point. Thank you for the chuckle.
Open claw is an agentic AI agent that interfaces with LLMs to do stuff like this. Apparently in the dumbest way possible.
I understand that, but he says he made some adjustments and after those it’s still checking 14 times a day? He seems satisfied with that outcome and I am just not sure if I’m misinformed or if there’s a reason that after the improvements it’s still requiring all those checks. It seems like the initial outcome was stupid, but I don’t understand why his improved outcome is viewed as an acceptable way to accomplish that task.
He is so far down his psychotic brain death he doesn’t even recognise the ridiculousness of his solution. That’s why he is satisfied, even though for you it’s obvious there are better solutions that don’t involve an LLM.
Who is suggesting that it’s a reasonable solution?
He seems to frame it as such. He notes that he’s learned lessons and seems to show it as a before/after in the table. Presumably if it wasn’t satisfactory he would not have stopped improving it there.