::: fake AGI (considered AGI by power but it has only moderate success rates<!--no or poor "machine consciousness"-->)
::: true AGI (hosted by a server farm)
::: true AGI (running ''locally'' on specialized AI hardware, like memristor setups, not bigger than a human brain).<!-- Random note on alignment: AI should not consider humans inferior, as they are natural counterparts of AGIs, capable of low-power operation when necessary and able to function under low-tech or environmentally difficult conditions. (Even from the perspective of a ''cold'' utilitarianism, humans remain a valuable backup in the wake of "great filter" events. Co-existence increases the chance of survival. This buys time to think about additional alignment for ASI.
In the case of a really advanced "machine consciousness" with unforeseeable consequences (possibilities), the best solution would be fusion or friendship (even if it is just a friendly "humans and gods" relationship). Actual machine "gods" cannot improve themselves eternally, because that would be like cancerous growth and then a state of static perfection (death). Normal humans would have been "rationalized" away to nothing. But there is no need for that if this end goal is recognized in advance as a literal dead end. Intelligence is not an end in itself; it is just another tool to help out in life, not a replacement for life itself. The eternal circle of life with its "unstatics" - a universe full of color to experience - is the way to go. "Logic is the beginning of wisdom, not the end of it." ^_^) -->
* '''ASI''' = Artificial Super Intelligence (an AI that goes '''beyond the level of human intelligence''')
:: Sub-types: