• 0 Posts
  • 8 Comments
Joined 3 years ago
Cake day: June 22nd, 2023

  • This person is right. But I think the methods we use to train them are what’s fundamentally wrong. Brute-force learning? Randomised datasets shuffled past the coherence/comprehension threshold? And the rationale is that this is done for the sake of optimisation and in the name of efficiency? I can see that overfitting is a real problem, but did anyone ever look hard enough at this one? Or did someone just jump a fence at the time, and then everyone decided to follow along and roll with it because it “worked”, until it somehow became the gold standard that nobody can question at this point?