same way an LLM is able to produce coherent and convincing sentences by statistically determining what word is likely to follow another.
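To make that mechanism concrete, here is a toy sketch (my illustration, not anything from the system being discussed) of next-word prediction at its crudest: a bigram model that picks whichever word most often followed the current one in its training text, regardless of whether it fits the actual situation.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" for the toy model.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the single most frequent successor of `word` in the corpus,
    # i.e. the statistically most likely continuation.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat" (follows "the" twice; "mat" and "fish" once each)
```

The point of the sketch is the failure mode: the model answers with what is *typical*, not what is *true*, which is exactly the property at issue below.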
To me this implies that the navigation AI is going to hallucinate parts of its model of the world, because it's basing that model on what's statistically the most likely to be there as opposed to what's actually there. What could go wrong?