Autonomous AI (older ANI) vs. newer Agentic AI
-->
* '''Agentic AI''' = '''autonomously working AI''', typically utilizing increased reasoning and planning.
: The word '''agentic''' is meant to emphasize that the '''AI''' - unlike a mere GenAI - executes given tasks '''more like a capable servant'''.
: Agentic AIs are essentially LLMs with extended write access that can use external tools, typically via the Model Context Protocol (MCP). By design, agentic AI systems (including their cloud-hosted LLM) '''still lack sufficient alignment'''. In other words, they are effectively beta software and can be dangerous both for one's own production environments and for others.
:: It can therefore be seen as the ironic embodiment of the tech industry's saying: "Move fast and break things." Unfortunately, people were impressed by demos such as moltbook, which fuel the illusion that these programs possess true intelligence.
:: As (still) probabilistic systems - a.k.a. "statistical parrots" - LLMs tend to treat many possible outputs as valid solutions unless they are explicitly prohibited. [https://www.golem.de/news/unkontrollierbares-fehlverhalten-ki-agenten-werden-zu-immer-groesserem-insider-risiko-2603-206491.html Because hacking is fundamentally a creative act, even simple and seemingly harmless directives such as "be more creative" can lead to unintended or even catastrophic outcomes.]
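The loop such systems run can be sketched in a few lines of plain Python. This is a conceptual toy, not a real implementation: the LLM is a stub, and the tool registry stands in for what an MCP client would provide; none of the names below are real APIs.

```python
# Minimal sketch of an agentic loop with a stubbed LLM and a hypothetical
# tool registry (real systems route tool calls through an MCP client
# and a hosted model).

def stub_llm(task, history):
    """Stand-in for the hosted LLM: decides the next tool call."""
    if not history:
        return ("read_file", "notes.txt")   # first step: gather context
    return ("done", None)                   # then: declare the task finished

TOOLS = {
    # The "extended write access" mentioned above would live here too
    # (e.g. write_file, run_shell) - which is exactly what makes it risky.
    "read_file": lambda arg: f"<contents of {arg}>",
}

def run_agent(task, llm=stub_llm, max_steps=5):
    history = []
    for _ in range(max_steps):              # hard step limit as a simple guardrail
        action, arg = llm(task, history)
        if action == "done":
            return history
        history.append((action, TOOLS[action](arg)))
    return history                          # budget exhausted without "done"
```

The step limit illustrates the kind of external guardrail such loops need, precisely because the model itself cannot be trusted to stop.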
:::: https://deepmind.google/blog/genie-3-a-new-frontier-for-world-models/
:::: https://www.nvidia.com/en-us/glossary/world-models/
:: Due to these shortcomings, there are also attempts to let specialized agentic AIs work together in larger groups, so-called '''multi-agent systems'''.
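The division of labor in such a group can be illustrated with a toy pipeline: one stub agent drafts, a second specialized stub agent reviews. In a real multi-agent system each would wrap its own LLM; all names here are made up.

```python
# Toy two-agent pipeline: a "worker" agent drafts, a "critic" agent reviews.
# Both are trivial stubs standing in for specialized LLM-backed agents.

def worker(task):
    return f"draft answer for: {task}"

def critic(draft):
    # Specialized second agent: accepts or rejects the worker's output.
    return "ok" if draft.startswith("draft answer") else "revise"

def multi_agent(task, rounds=3):
    for _ in range(rounds):                 # bounded back-and-forth
        draft = worker(task)
        if critic(draft) == "ok":
            return draft
    return None                             # no draft passed review
```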
* '''Physical AI''' = Physical Artificial Intelligence. Basically AI used in robots, including self-driving cars.
:: Since direct training in the real world can be dangerous, slow, and therefore ineffective, the AI is typically pre-trained in a simulation where the robot is represented by a digital twin. This setup naturally supports multimodal learning (MML) for robots. Like humans (or other real organisms), AIs benefit from having an "inner world" to improve understanding and reasoning. Alternatively, motion capture data can be used for pre-training. The use of large language models (LLMs) is optional but can be a useful design choice to assist humans in directing such systems.<!--Embodied AI-->
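The sim-to-real idea can be reduced to a deliberately tiny example: a 1-D "digital twin" is used to tune a single control parameter before any (hypothetical) real robot is involved. Nothing here corresponds to an actual robotics or simulation framework.

```python
# Conceptual sketch of sim-to-real pre-training: tune a control parameter
# against a cheap "digital twin" before touching a real robot.
# Purely illustrative; not based on any real robotics stack.

import random

random.seed(0)        # reproducible "simulation noise"
GOAL = 10.0           # target position of the 1-D point robot

def twin_rollout(step_size):
    """One noisy episode in the digital twin; returns final distance to goal."""
    pos = 0.0
    for _ in range(10):                     # 10 control steps per episode
        pos += step_size + random.gauss(0, 0.05)
    return abs(pos - GOAL)

def pretrain(candidates, episodes=50):
    """Pick the candidate with the lowest mean error in simulation."""
    def score(s):
        return sum(twin_rollout(s) for _ in range(episodes)) / episodes
    return min(candidates, key=score)

best = pretrain([0.5, 1.0, 1.5])            # 1.0 reaches the goal in expectation
```

The cheap simulated rollouts replace real-world trial and error; only the tuned parameter would ever be deployed on hardware.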