* '''[[wp:Symbolic_artificial_intelligence#Neuro-symbolic_AI:_integrating_neural_and_symbolic_approaches|Symbolic AI and neuro-symbolic AI]]'''
:: [...]
:: Real abstract thinking might require a form of symbolic AI. Since LLMs and world models are already available, symbolic AI might be combined with these approaches at some point.
* '''AGI''' = Artificial General Intelligence, also "strong AI" ('''on par with human thinking''', ''a real AI'' capable of fully self-improving and driving its own development)
:: In discussions, AGI is often equated with superintelligence. The argument is that as soon as AGI is achieved, ASI is just around the corner. This understates the '''wide scope of human intelligence''' and overlooks that AGI would likely be achieved first by hyperscalers, making further improvement through scaling alone difficult.