Talk:Restless Souls/Technology: Difference between revisions



==Machine learning and artificial intelligence==
===Terms===
Since the release of GPT-3, the public debate about AI has heated up.
:Is it intelligent?
::: multi-core ASI (GAIA v1)<!--historically, multi-core ASI might exist before single-core ASI because of performance reasons-->
:::: swarm-with-queen ASI (natural candidate: planetary ASI with own agents that allow for additional input, process and output)
<!--
Additional points?
====Pattern recognition====
====Machine learning====
====Artificial Intelligence====
--><!--
====Alignment====
====Intelligence====
=====Knowledge vs. wisdom=====
====Consciousness====
====Free will====
The illusion of free will is constructed by the complexity of the mind, its interaction with the world, and seemingly by the number of available options.
-->


===Why we thought it would hit the working class at first again===
: IIRC, TFS chose the monkey paw over Jinn because of the Saiyans (''man ape'' context) and it is more known to a Western audience.
: LLM '''machine logic''' can rival '''human logic''' in power, but it is not guaranteed to return the expected results...
:: In this example the LLM's logic went out of scope: it became too "global" because it lacked the boundaries humans assumed the AI already possessed, so they later concluded the AI had "lied".


[…]