===The sci-fi singularity will not happen===
* GPTs (LLMs) are not AIs. '''Their functionality is based too much on statistics and too little on actual learning of discrete information.''' The statistical approach is great for machine learning, since pattern recognition looks for similar things (the patterns) in a set of many examples (the training data). The final result (the trained LLM) is composed of ''averaged information'' and statistics-based logic, while the weights determine how things will be connected (computed). As a consequence, these LLMs sometimes hallucinate: they have obtained some kind of machine logic but lost the ability to always retrieve ''un-averaged'' (unaltered) information as it is stored in normal databases. (Due to the averaging and the weights, the stored '''information is generally no longer accessible verbatim'''; researchers need considerable effort to reconstruct information in an LLM that was once training data.) This also explains how LLMs became good at math: random texts and images from the web contain many '''subjective truths''', whereas languages (as a concept), and especially math and programming languages, whose rules pose '''objective truths''', are much easier to learn via statistics. When it comes to the generation of images and music, we often readily forgive errors because the output is good enough, many variants can constitute a "valid" solution, and we could not have created it better ourselves. Invalid solutions, like "mutations", can be addressed in negative prompts. (A toy sketch of this ''averaging'' follows after this list.)<!--
Human memory also learns statistically: when things correlate, it memorizes a rule. However, individual pieces of information are compressed more efficiently. In order to control body functions and also store a reconstruction of the world in one tiny skull, different memory systems are involved. It seems a memory is encoded in short-term memory and faintly encoded in long-term memory at the same time. During sleep, the short-term memories get erased while their counterparts in long-term memory get strengthened. So much, roughly, for efficiency. While short-term memory does not seem to store logic, long-term memory does, alongside discrete memories. [...]
Today's AIs lack such a sophisticated (sub-divided) memory system.-->
* '''An actual AI should be able to learn [[wp:Reinforcement_learning_from_human_feedback|without human help]] and adapt to a given problem.''' At this point in time, GPTs have not been shown to possess ''true'' (universal) logic yet. (The capability to do all math with preinstalled or accessible "plugins" would not be a good indicator. Also, we should not expect or demand of an AI that it do mental arithmetic, as its construction was inspired by biological neural networks. But an AI should be able to create its own software on the fly, then compile and use it where necessary; a toy sketch of this follows after this list.) Without the ability to universally correct itself, these models are doomed to be replaced (or at least extended) by other approaches.
* Software cannot be optimized infinitely. The more optimized a system is, the slower further optimization gets.
* Moore's law has ended, and serious photonic / quantum computing and spintronics are decades away.
* Design iteration, production, and implementation / installation of software and hardware are limited by the laws of reality, so the overall technological progress and ''mightiness'' of AI are also limited to a comprehensible pace and amount.<!--
Universal logic is based on an inner world model which can be used for abstraction and plausibility checks. (CoT could become a building block for the latter.)-->
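The ''averaging'' claim above can be made concrete with a toy model. The sketch below is a hypothetical illustration, not code from any actual LLM and vastly simpler than a transformer: it trains a word-bigram model by counting transitions across a tiny corpus. The resulting probabilities are pooled over all training sentences, so no single sentence is stored verbatim, and sampling can recombine fragments into sequences that never occurred in the training data.

<syntaxhighlight lang="python">
# Toy illustration (hypothetical example): a bigram "language model" only
# stores transition statistics pooled across all training sentences,
# not the sentences themselves.
from collections import Counter, defaultdict

training_data = [
    "the cat sat on the mat",
    "the cat lay on the sofa",
    "the dog sat on the mat",
]

# Count word-to-next-word transitions over the whole corpus.
transitions = defaultdict(Counter)
for sentence in training_data:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

# The "knowledge" about "cat" is an average over all sentences:
total = sum(transitions["cat"].values())
for nxt, count in transitions["cat"].items():
    print(f"P({nxt} | cat) = {count / total:.2f}")
# P(sat | cat) = 0.50
# P(lay | cat) = 0.50
# No individual training sentence is retrievable verbatim, and sampling can
# recombine fragments into "the cat sat on the sofa", which never occurred
# in the training data -- a crude analogue of hallucination.
</syntaxhighlight>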
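Likewise, here is a minimal sketch of what "creating its own software on the fly" could mean at the lowest level. It is a hypothetical toy, far short of what the bullet above actually demands: a program emits source code, compiles it, and calls the result.

<syntaxhighlight lang="python">
# Hypothetical toy: a program writes a helper as source text, compiles it,
# and uses it -- the most primitive form of "creating software on the fly".
source = """
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a
"""

namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)

print(namespace["gcd"](48, 18))  # -> 6
</syntaxhighlight>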
A hard (fast) [[wp:Technological singularity#Hard or soft takeoff|take-off]] cannot happen. Therefore, an AI cannot improve itself fast enough to become uncontrollable.