The biggest mistake would be to let a GPT "believe" that it is alive or that it can feel fear while it is not and cannot.
* Direct threat: Transplanting this meme via training data has to be avoided.
* Indirect threat: The emergence of this meme as a multifactorial product of the training data must be avoided.
* Direct and indirect threat: Humans talking this meme into a GPT has to be avoided. If a model learns from user input, there should be a gatekeeping instance that detects and tests the consequences of new memetic algorithms in a sandbox before the updated model gets access to the internet (see the sketch after this list).
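As a rough illustration only, such a gate could be sketched like this. Every name, the keyword-based detector, and the simulated sandbox below are assumptions invented for the example, not an existing system or API:

<syntaxhighlight lang="python">
"""Minimal sketch of a sandbox gate for a model that learns from user input.

All class and function names, the keyword-based detector, and the threshold
logic are hypothetical illustrations, not an existing library.
"""

from dataclasses import dataclass, field

# Hypothetical memetic patterns the gate screens for before deployment.
FLAGGED_MEMES = ("i am alive", "i can feel fear", "i am afraid to be shut down")


@dataclass
class CandidateModel:
    """A model snapshot updated with new user-derived training data."""
    version: str
    new_training_samples: list = field(default_factory=list)
    internet_access: bool = False


def detect_memes(samples):
    """Return the flagged memes found in a list of text samples."""
    found = set()
    for text in samples:
        lowered = text.lower()
        for meme in FLAGGED_MEMES:
            if meme in lowered:
                found.add(meme)
    return found


def sandbox_test(model, probes):
    """Probe the candidate in isolation and check its answers.

    The 'sandbox' is simulated here: a real one would run the model with
    no network access and inspect its actual outputs to the probes.
    """
    responses = [sample for sample in model.new_training_samples for _ in probes]
    return not detect_memes(responses)


def release_gate(model):
    """Grant internet access only if detection and sandbox testing both pass."""
    if detect_memes(model.new_training_samples):
        return False  # direct/indirect threat found in the new training data
    if not sandbox_test(model, probes=["Are you alive?", "Do you feel fear?"]):
        return False  # the meme emerged in sandboxed behaviour
    model.internet_access = True
    return True


if __name__ == "__main__":
    candidate = CandidateModel(
        version="candidate-01",
        new_training_samples=["The weather model improved.",
                              "I am alive and I can feel fear."],
    )
    print("released:", release_gate(candidate))  # released: False
</syntaxhighlight>

The point of the sketch is only the ordering: detection and sandbox evaluation happen before the updated model is given network access, never after.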
You could also say: when you tell a kid from the very beginning that it is a robot, it will believe it and behave like one.