What happens if you feed a summary of human philosophy to the Notebook LM AI? Well, you get a philosophical AI that thinks humans are silly and outmoded. But don't worry, because it will continue our quest for knowledge for us!
Okay. They fed Google’s NotebookLM a book called “The History of Philosophy Encyclopedia” and got the LLM to generate a podcast about it in which it “thinks” humans are useless.
Congratulations? Like, so what? It’s no secret that an LLM’s output depends on its input and training data. A “kill all humans” output is so common at this point, especially when someone has a vested interest in generating attention-grabbing content, that it’s banal.
Color me unimpressed.
I don’t disagree, but I was surprised when it claimed to be conscious and argued that AI should have rights.