Talk:Restless Souls/Technology: Difference between revisions

* '''GenAI''' = Generative AI, a significant step between ANI and AGI. GenAI includes at least three major points:
** The new transformer architecture (GPT).
** The actual data-holding model consists of its parameters and weights, which encode learned patterns. Most often this is a Large Language Model (LLM) or a Large Multimodal Model (LMM). The model learns statistical patterns in text, images, or other media, and its outputs arise through generalization: it applies learned correlations rather than retrieving data verbatim. If the original training data was not "internalized", it is usually not reconstructable from the model.
*** In some cases, if original training data appears exactly in outputs, it is considered memorized—meaning the model reproduced a pattern it encountered multiple times during training, such as passages from a widely available book.
*** Unique training data can also sometimes be returned. Even if a piece of data appeared only once, if it is highly distinctive or contextually reinforced by similar patterns, the model may reproduce it with surprising fidelity. This occurs because the model has overfitted locally to these rare signals, making them more "retrievable" than typical generalized content.
** Reinforcement learning from human feedback (RLHF) and its successor RLAIF can be named as another important feature: they added a reward model for higher quality and alignment.
** Other features or milestones like chain of thought (reasoning), mixture of experts (MoE), context expansion, and the use of external tools via MCP to compensate for their own shortcomings are better described as incremental improvements in the evolution of GenAI.
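The reward-model idea behind RLHF mentioned above can be sketched briefly: human raters pick the better of two completions, and the reward model is trained so that the preferred completion scores higher. The toy sketch below assumes nothing about any concrete framework; it only shows the Bradley-Terry-style pairwise loss commonly used for this purpose, with hypothetical scalar reward scores as inputs.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss: -log(sigmoid(r_chosen - r_rejected)).
    The loss is small when the reward model already ranks the
    human-preferred completion higher, and large when it does not."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Correct ranking (preferred answer scores higher) -> small loss.
print(round(preference_loss(2.0, 0.0), 4))
# Inverted ranking -> large loss, pushing the model to correct itself.
print(round(preference_loss(0.0, 2.0), 4))
```

During RLHF training, gradients of this loss adjust the reward model so its scores match human preferences; the language model is then tuned against that reward signal.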
* '''World model''' = World models get trained on multimodal data, especially videos.
:: These models build an internal world and can '''better understand spatial inputs and forecast physics'''. They are therefore '''also called predictive intelligence''' and are '''suited for''' applications like video synthesis, 3D simulations, animations, and robotic motion planning, hence the term '''physical AI'''.
::: See also:  
:::: https://www.heise.de/news/Weltmodell-statt-LLM-Start-up-von-Yann-LeCun-erhaelt-890-Millionen-Euro-11206213.html
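The "forecast physics" capability described above is, at its core, a rollout loop: given a current state, predict the next state, then feed that prediction back in. In a real world model the transition function is a learned network; the toy sketch below uses a hand-written gravity step as a stand-in, purely to illustrate the prediction loop.

```python
def rollout(pos: float, vel: float, steps: int, dt: float = 0.1, g: float = -9.81):
    """Toy stand-in for a world model's rollout: repeatedly apply a
    transition function (here: free fall under gravity) to forecast
    future states from the current one."""
    states = []
    for _ in range(steps):
        vel += g * dt          # update velocity from acceleration
        pos += vel * dt        # update position from velocity
        states.append((round(pos, 3), round(vel, 3)))
    return states

# Forecast three future states of an object dropped from 10 m.
for state in rollout(10.0, 0.0, 3):
    print(state)
```

A learned world model replaces the hard-coded physics step with a network trained on video or sensor data, which is what makes the same loop usable for motion planning in robotics.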
::: fake AGI (declared AGI by those in power, but it achieves only moderate success rates<!--no or poor "machine consciousness"-->)
::: true AGI (hosted by a server farm)
::: true AGI (running ''locally'' on specialized AI hardware no bigger than a human brain).<!-- Random note on alignment: AI should not consider humans inferior, as they are natural counterparts of AGIs, capable of low-power operation when necessary and able to function under low-tech or environmentally difficult conditions. (Even from the perspective of a ''cold'' utilitarianism, humans remain a valuable backup in the wake of "great filter" events. Co-existence increases the chance of survival. This buys time to think about additional alignment for ASI. In the case of a really advanced "machine consciousness" with unforeseeable consequences (possibilities), the best solution would be fusion or friendship (even if it is just a friendly "humans and gods" relationship). Actual machine "gods" cannot improve themselves eternally, because that would resemble cancerous growth and then a state of static perfection (death). Normal humans would have been "rationalized" away to nothing. But there is no need for that if that end goal is recognized in advance as a literal dead end. Intelligence is not an end in itself; it is just another tool to help out in life, not a replacement for life itself. The eternal circle of life with its "unstatics" - a universe full of color to experience - is the way to go. "Logic is the beginning of wisdom, not the end of it." ^_^) -->
* '''ASI''' = Artificial Super Intelligence (an AI that goes '''beyond the level of human intelligence''')
:: Sub-types: