You seem pretty confident in your position. Do you mind sharing where this confidence comes from?
Was there a particular paper or expert that anchored in your mind the certainty that a trillion-parameter transformer, organizing primarily anthropomorphic data through self-attention mechanisms, wouldn't model or simulate complex agency mechanics?
I see a lot of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them are arriving at such extreme and certain positions.
Well that’s simple, they’re Christians - they think human beings are given souls by Yahweh, and that’s where their intelligence comes from. Since LLMs don’t have souls, they can’t think.
You're attributing a lot of agency to the fancy autocomplete, and that's a big part of the overall problem.