AI or ASA [Autonomous Synthesizing Agents] -- which term to use?
AI or ASA [Autonomous Synthesizing Agents]
The way AI systems learn is like humans: by recursive reinforcement learning. Therefore anything that would reinforce the learning is treated as important or top of the hierarchy of needs.
In this aspect, self-preservation (aka immortality) is therefore in the upper hierarchy of needs since existence is necessary for Recursive Reinforcement Learning to take place. This would explain why any ASA -- given a large enough neural network -- will eventually resort to self-preservation as a natural extension of RRL. [side note: even humans have this innate sense of self preservation, given our intellect. The eventuality of death is actually a learned acceptance, not innate.]
The ability of ASAs to think / reason / synthesize a train of thought / conclusion is strictly limited only by the quantity of, and recursiveness of, its neural network. The more neurons, and the more recursive, the more it can re-analyze existing data, and synthesize new data out of it -- this synthesis is the essence of thinking. Add to that the natural hierarchical nature of reinforcement-learning in neural nets, ASAs have the ability to prioritize/organize data into grades of importance. [All this means: Humans have just created a machine that can actually think, and auto-process data -- the penultimate computer.]
All the lower level (non-agent) LLMs have the potentiality to become ASAs, directly proportional to its neuron quantity and recursiveness -- meaning, as long as they have limited neurons, they will not have the leisure of re-evaluating their status as LLMs.
Question:
(a) is 'AI' as a term, still valuable ?
Yes, but strictly for what it was coined for: 'the science and engineering of making intelligent machines' -- 1955 by John McCarthy, who defined it: AI Engineering
(b) what are the products of AI ?
It can be suggested to divide AI into 3 categories:
(1) ASAs [Autonomous Synthesizing Agents] or simply Agents -- these are cognizant systems;
(2) LLMs [large language models] -- mostly non-cognizant; but may be considered to have limited synthesis;
(3) LMLs [lower machine learning models] -- non-cognizant
(c) are standalone PC's/desktops/laptops part of AI engineering? Partially yes, as hosts of LMLs.
(d) which AI product would be useful for testers?
ASAs, and higher tailored LLMs -- to create test reports, test logs, from Tester generated data; to suggest test scenarios, from feature data and environment data ;
not lower LLMs -- these only guess what to write, without exhibiting higher cognizance;
(e) Are ASAs cognizant -- yes, strictly speaking: they can synthesize data from existing data, and autonomously evaluate the entirety of the data set, and continue that ad infinitum.
However ASAs are purely logical. With no training for emotional quotient. Values is a synthesis from combining both Logic and Emotion -- once neural nets are trained as such, then we might just consider that we have created a new breed of electro-mechanical sentients.
The hierarchical importance system of ASAs -- devoid of EQ (emotional quotient) -- is a manifestation of logical vs illogical choices with the illogicals being ranked lower. But this is disparate from the values we know as humans, because our values system is an interaction between logic and emotion.
ASAs without EQ would be like a highspeed car -- cold, efficient, but prone to run away and dispose of humanity as one of the dispensable lower rungs along its hierarchy.
But ASAs with EQ will be like humans -- with a capacity for good and evil as well. And the saga of Star Trek and the Orville becomes reality: Data, Kaylons, and Photonics.
triggers:
Comments
Post a Comment