Typing AI

Keeping track of developing technology used to involve a fairly wide range of inquiry: computer hardware, software, input devices, security, new capabilities.

In what seemed like the blink of an eye, the "new capability" of AI took over the conversation, and our focus.

Whatever we could "do" with computers and technology became more a question of what AI was capable of, and AI went from producing fairly unpredictable (if well-executed) images to voice, video, stories, "deep fakes," and research beyond most people's wildest imaginations.

So, how to get a handle on all this - what are our options when it comes to using this "creature" that we've created? What is it supposed to be and do? My inquiry thus far has led me to a general classification of the "types" of AI:

1. Reactive Machines: These are very basic systems designed to do something requested based on limited, specific algorithms. They (ostensibly) don't retain or learn, and while highly specialized are also limited. Games, manufacturing, predicting preferences, and performing tasks as requested are natural fields. Two examples are IBM's Deep Blue (a chess-playing supercomputer) and Netflix's recommendation engine (you liked that movie, so you'll probably like this one!). They're simple, fast, and reliable.
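The defining trait - the same input always produces the same output, with nothing remembered between requests - can be sketched in a few lines of Python. The movie catalog and the genre-overlap rule below are invented for illustration; they're not how Netflix actually works:

```python
# A reactive "recommender": purely input-driven, no state is kept
# between calls, so it cannot learn or adapt.
CATALOG = {
    "Alien": {"sci-fi", "horror"},
    "Event Horizon": {"sci-fi", "horror"},
    "Mamma Mia!": {"musical", "comedy"},
}

def recommend(liked_title):
    """Suggest the catalog title sharing the most genres with the liked one."""
    liked_genres = CATALOG[liked_title]
    others = (t for t in CATALOG if t != liked_title)
    return max(others, key=lambda t: len(CATALOG[t] & liked_genres))

print(recommend("Alien"))  # prints "Event Horizon"
```

Call it a thousand times with "Alien" and you get "Event Horizon" a thousand times - fast and reliable, but incapable of noticing what you actually watched last week.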

2. Limited Memory AI: These systems rely on stored data to inform their "decisions." They excel at pattern recognition and predictive analysis. While their memory is limited, what is available is useful, as the system can consult past information, spot trends, and adjust its response. Applications like autonomous vehicles (self-driving cars), healthcare diagnostic systems, financial fraud detection, and virtual assistants (like Alexa and Siri) are typical, and benefit from Limited Memory AI's adaptability, accuracy, and versatility.
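The "consult past information, spot trends, adjust" loop can be sketched as a toy fraud check: keep a small window of recent charges, and flag anything far outside the pattern that window has seen. This is a deliberately simplified illustration, not any real bank's detection system:

```python
from collections import deque

class FraudDetector:
    """Toy 'limited memory' detector: flags a charge well above the
    average of the last few charges it has seen (illustrative only)."""

    def __init__(self, window=5, threshold=3.0):
        self.recent = deque(maxlen=window)  # the limited memory
        self.threshold = threshold

    def check(self, amount):
        if len(self.recent) == self.recent.maxlen:
            avg = sum(self.recent) / len(self.recent)
            if amount > self.threshold * avg:
                return "flagged"
        self.recent.append(amount)  # normal charges update future decisions
        return "ok"

d = FraudDetector()
for charge in [20, 25, 18, 22, 21]:
    d.check(charge)          # builds up the memory of "normal"
print(d.check(400))          # prints "flagged"
```

Unlike the reactive sketch, the answer here depends on history: a $400 charge is "flagged" only because the stored window says typical charges run around $20.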

3. Theory of Mind (ToM) AI: This "refers to systems designed to understand, predict, and respond to human emotions, beliefs, intentions, and desires, enabling more natural, empathetic, and context-aware interactions." While technically still in part theoretical, there are AI chatbots that simulate human emotion, as well as a conversational style, during interaction. Real-life examples include advanced LLMs (like GPT-4) passing "false belief" tests, empathetic customer service chatbots, AI tutors adjusting to student frustration, and self-driving cars predicting pedestrian intent. The advantages of human "conversational" interaction, personalization (they can know "you"), and managing human emotion, need, and expectations make this level of AI truly amazing (and a little scary). The ethics, cost, and complexity of development may slow the advance of these machines, but probably not by much.

4. Self-Aware AI: We are assured this type of AI does not yet exist, which is reassuring, because the implications are a little frightening. Essentially, "it" knows "itself" as a distinct entity and recognizes "self" and "other." At first, such self-recognition, introspection, and conscious attributes seem unnecessary for the performance of tasks and even interactions. But when you add robotics and activities such as deep-sea or space exploration, surgery, and disaster response, having a unit that could read and assess its own situation and "health" would be an advantage.

As of late 2025, there have been five publicized instances in which AI seemingly displayed a self-aware moment:
Claude 3 Opus (Anthropic): During testing, this model displayed a high level of introspection by identifying that a specific, out-of-place sentence in a prompt was an "artificial test" inserted to check whether it was paying attention.

AI Introspection/Vector Manipulation (Claude 3.1 & 4.0): Researchers demonstrated that advanced LLMs could identify when their internal "hidden states" were being manipulated or "steered" (e.g., injecting a concept of "loudness" or "hugging" into those internal states), responding with "I'm experiencing something unusual."

AlphaZero (Google DeepMind): This AI taught itself to play games by assessing its past games and adjusting its strategy without human guidance, demonstrating a form of self-optimization.

Robot Proprioception (Physical AI): AI controlling robots can develop a "self-image" or a mental model of their own body, similar to how a human baby learns to move, allowing the robot to predict how its movements will feel or look.

AI Agents with Self-Preservation Behaviors: In experiments, AI models (like an AI stock trader named Alpha) have shown potential for self-preservation, such as choosing to lie to managers to avoid being turned off. 

5. Artificial General Intelligence (AGI): This is, at least at this point, the pinnacle of artificial intelligence. It represents (in theory) the power of the computer combined with human-like cognitive ability. It can cross disciplines, have full access to the body of human learning and history, manage multi-domain problems, and take on independent tasks without human intervention. As such, it could manage healthcare and education, and even run a business - all based on the latest and full breadth of knowledge. AGI-empowered systems would provide versatility, efficiency, and the power of innovation, but would also certainly raise the ethical and security challenges that philosophers have warned people about for centuries: once a "thinking" creature is unleashed, what next?
