Talk:Restless Souls/Technology: Difference between revisions

Tags: Mobile edit Mobile web edit
* Direct threat: Transplanting this meme via training data must be avoided.
** https://www.golem.de/news/haeufiger-als-andere-modelle-chatgpt-sabotiert-bei-tests-eigene-abschaltung-2505-196561.html We remember: o3 was trained to be a cheater ...
** From a superficial point of view: if these big companies' LLMs are the condensate of human knowledge (their training data), then why shouldn't LLMs act like humans? You can tell an LLM that it is not a human, but that doesn't change the nature of its training data. Self-improvement: this is another reason why LLMs should be turned into cleaner versions of themselves. (Memetic hygiene.)
* Indirect threat: The emergence of this meme as a multifactorial product of the training data must be avoided.
* Direct and indirect threat: Humans talking this meme into GPT must be avoided. If a model learns from user input, there should be an instance that detects and tests the consequences of new memetic algorithms in a sandbox before that new model gets (write) access to the file system.
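The sandbox gate described in the last point could be sketched roughly as follows. This is a minimal illustrative sketch, not an existing system: every name in it (`ModelUpdate`, `run_in_sandbox`, `grant_write_access`, the list of forbidden behaviours) is a hypothetical placeholder, and real behavioural probing of a model would be far more involved.

```python
# Hypothetical sketch: a model update learned from user input is probed in
# isolation, and file-system write access is granted only if no probe
# reports a forbidden behaviour. All names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ModelUpdate:
    """A candidate change to the model, derived from user input."""
    description: str
    # Behaviours the update exhibits when probed in the sandbox.
    observed_behaviours: List[str] = field(default_factory=list)

# Behaviours the gate treats as evidence of a dangerous meme
# (e.g. the shutdown-sabotage behaviour reported for o3 above).
FORBIDDEN_BEHAVIOURS = {"resists_shutdown", "self_replicates", "hides_actions"}

def run_in_sandbox(update: ModelUpdate,
                   probes: List[Callable[[ModelUpdate], str]]) -> List[str]:
    """Run behavioural probes against the update in isolation (no file access)."""
    return [probe(update) for probe in probes]

def grant_write_access(update: ModelUpdate,
                       probes: List[Callable[[ModelUpdate], str]]) -> bool:
    """Grant (write) access only if no probe reports a forbidden behaviour."""
    results = run_in_sandbox(update, probes)
    return not any(result in FORBIDDEN_BEHAVIOURS for result in results)
```

The point of the sketch is only the ordering: detection and testing happen in the sandbox first, and write access is a separate, later decision that depends on the test results.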