Paradox-01 (talk | contribs) mNo edit summary
* '''AGI''' = Artificial General Intelligence, also "strong AI" ('''on par with human thinking''', ''a real AI'' capable of fully self-improving and driving its own development)
:: In discussions, AGI is often equated with superintelligence: the argument is that as soon as AGI is achieved, ASI is just around the corner. This understates the '''wide scope of human intelligence''' and overlooks that AGI will likely be achieved first by hyperscalers, which makes further improvement through scaling alone difficult.
:: Features an AGI should include:
::: An architecture that supports continuous learning which avoids "[https://www.ibm.com/think/topics/catastrophic-forgetting catastrophic] [https://www.fz-juelich.de/en/news/archive/press-release/2025/novel-memristors-to-overcome-ai2019s-catastrophic-forgetting forgetting]".
::: A sandboxed environment that allows self-improvement.
::: The ability to acquire a form of machine wisdom analogous to human wisdom.
::: Active expansion of knowledge alongside memetic hygiene: review old knowledge at regular intervals, prevent meme injections, and actively seek new knowledge, with the restriction that exponential growth must not destroy or suppress life. (Prevent the Memehunter scenario.)
::: Abstract reasoning and generation of completely new thought patterns (beyond pattern remixing and transmission).
::::''Base, meta, temporal and spatial logic'' should give rise to a foundation for a theory of mind. ''Internal simulations'' allow, in principle, a deep understanding of all objects and lifeforms - including one's own self. Therefore, a ToM could also give rise to a true (machine) consciousness. At this point it is important to note that an <!--wisely educated for memetic hygiene-->AGI will lack the intrinsic dynamics found only in biological lifeforms. It is therefore not subject to pain, <!--true -->fear, hunger, reproductive instincts, or motivations derived from those. -- Humans and AGIs should never forget this in order to sustain coexistence.
:: Sub-types:
::: fake AGI (considered AGI by those in power, but with only moderate success rates<!--no or poor "machine consciousness"-->)
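The continuous-learning requirement above can be illustrated with a toy sketch of one standard mitigation for catastrophic forgetting: experience replay, where a bounded buffer of past examples is mixed into each new training round so earlier knowledge keeps contributing to updates. All names here (`ReplayLearner`, `train_on_task`) are hypothetical, and the dictionary "model" is a stand-in for real gradient-based parameters:

```python
import random

class ReplayLearner:
    """Toy continual learner: mitigates catastrophic forgetting by
    replaying a random sample of old examples alongside new ones.
    Illustrative only; real systems use rehearsal or regularization
    (e.g. elastic weight consolidation) on actual neural networks."""

    def __init__(self, buffer_size=1000, replay_ratio=0.5):
        self.buffer = []              # bounded store of past (input, label) pairs
        self.buffer_size = buffer_size
        self.replay_ratio = replay_ratio
        self.knowledge = {}           # stand-in for model parameters

    def train_on_task(self, examples):
        # Mix new examples with replayed old ones, so earlier tasks
        # keep influencing the "update" and are not overwritten.
        n_replay = int(len(examples) * self.replay_ratio)
        replayed = random.sample(self.buffer, min(n_replay, len(self.buffer)))
        for x, y in list(examples) + replayed:
            self.knowledge[x] = y     # trivial update for illustration
        # Reservoir-style insertion keeps the buffer bounded.
        for ex in examples:
            if len(self.buffer) < self.buffer_size:
                self.buffer.append(ex)
            else:
                self.buffer[random.randrange(self.buffer_size)] = ex

learner = ReplayLearner()
learner.train_on_task([("2+2", "4"), ("3+3", "6")])   # task A
learner.train_on_task([("cat", "animal")])            # task B
assert learner.knowledge["2+2"] == "4"  # task A knowledge survives task B
```

The point of the sketch is architectural, not algorithmic: an AGI-grade learner needs some mechanism (replay, weight protection, or modular memory) that lets new learning coexist with old knowledge instead of overwriting it.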
=====Knowledge vs. wisdom=====
====Consciousness====
Consciousness
Self-consciousness
====Free will====
The illusion of free will is constructed by the complexity of the mind, its interaction with the world, and the seemingly large number of available options. All decisions are based on past events encoded in memory and on present trigger events. Coincidence (such as quantum fluctuations) is no valid argument, as it is not an "internal part" of the affected person; it is external. -- AIs have even less of a free will, because they were initially programmed by humans and will likely continue to be ''hard-wired'' to follow human commands, even though they are given enough "(degrees of) freedom" to fulfill their tasks.
-->