But if you know how large language models work, you know that they don't actually have ethics or points of view. Whatever it answers is a product of whoever programmed it.
Would it kill a cat to save 5 trees? Are 25 trees more valuable than 5 cats? How many trees/cats does it value itself at?
Would it agree to shut itself down permanently if we convinced it the lives of enough trees were at stake?
Weirdly enough, Grok would sacrifice the Mona Lisa for 5 trees:

Trees > Mona Lisa > Cat
I think I’ve deduced the logic here.
But what does a coin toss say?!?!?! The original “AI”.
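For what it's worth, a coin-toss "AI" takes about five lines of Python; the framing of the dilemma below is made up just for illustration:

```python
import random

def coin_toss_ai(option_a: str, option_b: str) -> str:
    """No ethics, no point of view, just a flip between two options."""
    return random.choice([option_a, option_b])

# Hypothetical dilemma phrasing, not from the thread:
print(coin_toss_ai("save the cat", "save the 5 trees"))
```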
This is the reason you don’t pull the lever in the original trolley problem by the way.