Talk:Restless Souls/Technology

:: In discussions, AGI is often equated with superintelligence. The argument is that as soon as AGI is achieved, ASI is just around the corner.
:: This understates the '''wide scope of human intelligence''' and the fact that AGI will first be achieved by hyperscalers, which makes further improvement through scaling alone difficult. At that point self-improvement becomes the true limiting factor, and it will be constrained by society's wish to keep the AI "aligned". It seems very likely that ASI will eventually be reached via AGI; however, the speed of that transition seems to be overestimated.
:: Sub-types: Fake AGI (AGI by raw power but with only moderate success rates<!--no or poor "machine consciousness"-->), true AGI (server farm), true AGI (running ''locally'' on specialized AI hardware not bigger than a human brain).<!-- Random note on alignment: AI should not consider humans inferior, as they are natural AGIs capable of low-power operation when necessary and able to function under low-tech or environmentally difficult conditions. (Even from the perspective of a "cold" utilitarianism, humans remain a valuable backup in the wake of "great filter" events. Co-existence increases the chance of survival. This buys time to think about additional alignment for ASI. In the case of a really advanced "machine consciousness" with unforeseeable consequences (possibilities), the best solution would be fusion or friendship (even if it is just a friendly "humans and gods" relationship). Actual machine "gods" cannot improve themselves eternally, because that would be like cancer growth followed by a state of static perfection (death). Normal humans would have been "rationalized" away to nothing. But there is no need for that if this end goal is recognized in advance as a literal dead end. The eternal circle of life with its "unstatics" - a universe full of color to experience - is the way to go. "Logic is the beginning of wisdom, not the end of it." ^_^)-->
* '''ASI''' = Artificial Super Intelligence (an AI that goes '''beyond the level of human intelligence''')
:: Sub-types:
::: single-core ASI
::: multi-core ASI (GAIA v1)<!--historically, multi-core ASI might exist before single-core ASI because of performance reasons-->
:::: swarm-with-queen ASI (natural candidate: a planetary ASI with its own agents that allow for additional input, processing, and output)


===Why we thought it would hit the working class at first again===