You can’t fix societal problems with technical solutions.
Even if you could, you’d first need a functioning technical solution, not a machine built merely to superficially pretend it’s one.
Factiverse’s success rate is around 80%.
That seems pretty close to useless even if we assume that the criteria they’ve used to define success are spot-on. Funny how they don’t mention it until the second-to-last paragraph.
Fight bullshit with a bullshit generator. What could possibly go wrong?
Even more bullshit.
Smells like a scam.
That’s great and all, but it assumes that people actually WANT true and correct information instead of sound bites that confirm their biases and align with their tribalistic mindset, which is quite often simply not the case…
As of today, Factiverse says it outperforms GPT-4, Mistral 7B, and GPT-3 in its ability to identify fact-check-worthy claims in 114 languages.
What about newer models? These are super old.