A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.
It also includes outtakes from the 'reasoning' models.
It’s super simple (if you want it to be):
https://www.jan.ai/
https://www.jan.ai/docs/desktop/quickstart
PS: You might like the thing I’m building too. The TL;DR premise is: what if you could make an LLM either tell the truth or lie loudly?
https://codeberg.org/BobbyLLM/llama-conductor