A New Vision for Artificial Intelligence

Nicholas Mitsakos
10 min read · Nov 5, 2022

Artificial intelligence has received a lot of attention and hype for what it can do, but perhaps more attention is given to the fear of what it could do. There is a different way to think about artificial intelligence and its relevance as a powerful tool and a way to create efficiency and value.

Too much is expected of current models, and these models cannot scale effectively. There is a better approach that can enable more useful applications and scalability for this technology.

A new vision for artificial intelligence uses smaller, more relevant data sets for dynamic learning, generating more effective outcomes and better predictions.

This model uses cognitive architecture, learns, transfers learning, and retains knowledge — enabling more valuable and compelling artificial intelligence applications.

This approach is more closely related to the brain’s actual structures and much more effective than “neural networks,” a catchy name whose similarity to the brain’s actual functioning is in name only. Real advancement in artificial intelligence must live in reality, not theoretical marketing.

The current state of artificial intelligence shows the shortcomings of big data and trial-and-error approaches. A new AI vision can be a more effective solution. Smaller data sets, more relevant information, dynamic data, and algorithms will lead to more appropriate outcomes, better tools, and more effective applications.

Big Data Doesn’t Win

There is a common misconception that big data brings better information and therefore better solutions. This is simply not the case. As we have scaled the quantity of data, we have come to understand the limitations of aggregating it into useful output. Essentially, common sense is showing us that we are not getting useful conclusions from large pools of big data.

The solution is smaller, more useful data sets that follow a system like human cognitive architecture. We understand more about how the brain works and how it is able to assimilate, store, and create new knowledge — fundamentally the goal of artificial intelligence. The brain can do all that with far less energy; in fact, the models we currently use require so much power that they simply cannot scale. We are looking to develop better and more effective AI solutions.

Big Data and Common Sense

One of the challenges is that it is misleading to call “neural networks” a system like the neural networks of organic brain functioning. Artificial intelligence is not a re-creation of a biological neural network. We understand that cognitive structure works by processing information in a series of layers, and this has been re-created in most artificial intelligence systems. Essentially, it is layered thinking re-created as a series of equations, where each layer’s output feeds the next layer’s equation, and so on.
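To make the layering concrete, here is a minimal sketch in plain Python of one equation feeding the next. The weights and inputs are invented for illustration; real systems use many units per layer, but the structure is the same.

```python
import math

def layer(inputs, weights, bias):
    # One "layer of equations": a weighted sum of inputs passed
    # through a nonlinearity.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(z)

# Layer 1 produces an intermediate value; layer 2 builds on it.
x = [0.5, -1.0]
h = layer(x, [0.8, 0.3], 0.1)   # first equation
y = layer([h], [1.2], -0.2)     # next equation, layered on the first
```

Each call is just an equation; "depth" is nothing more than the output of one equation becoming the input of the next.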

This is a better structure but that doesn’t mean better is good enough. There are too many shortcomings to this approach.

Artificial General Intelligence Isn’t

One misconception is that artificial intelligence will be everywhere; it has been called “the new electricity.” This is a misconception because electricity is a uniform commodity. It doesn’t vary in the function it performs. It provides energy, and that energy is put to many different applications, but the ubiquity of electricity does not suddenly create a ubiquity of applications.

Artificial intelligence is the opposite of this. It is customized and specific to applications and can create great efficiencies and enhanced capability, but it must essentially be customized. To be the most effective in its function, there needs to be specific data with deep correlations and a phenomenal understanding of the most effective outcomes. This is not general, and it is certainly not intelligence that can be applied broadly.

AI needs to be dynamic, learn, and understand relevant data, re-learn and then apply that new capability. It is the opposite of something like electricity which, while ubiquitous, is not dynamic nor does it need to be modified for a specific application. It is the dynamic nature of artificial intelligence, the constant learning, refining, and understanding of new relevant data, that makes the true distinction for usable AI.

What Matters

Detail, or the lack of it, matters. Current artificial intelligence models try to look at images and re-create them pixel by pixel, predicting what will change. This not only cannot scale, it is also a waste of processing power and profoundly inefficient. It does not match real life, where common sense and our subconscious mind teach us to ignore things that simply do not matter. This needs to be a standard for artificial intelligence models as well.

A simple example is trying to catch a ball. Understanding the physical forces of gravity, acceleration, momentum, and mass amounts to a very challenging calculation, yet the subconscious brain does it quite simply. One of the reasons is that it ignores the nonrelevant information. Many things are going on in the background, but those details don’t apply to catching the ball.

In large-scale artificial intelligence models, this process of focusing on essential details is lacking. Instead of a dynamic model that determines which data is relevant, all the data is processed and considered, and then the next steps are predicted. It’s clear from this example that such a model will never catch a ball — or be responsive enough in a dynamic driving environment to be a fully self-driving autonomous vehicle.
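A toy sketch of the alternative: score inputs for relevance and predict only from the signals that matter, rather than processing everything. The feature names and scores below are invented for illustration.

```python
def relevant(features, scores, threshold=0.5):
    # Keep only the features whose relevance score clears the threshold.
    return {k: v for k, v in features.items() if scores.get(k, 0.0) >= threshold}

# Everything the "senses" report while trying to catch a ball...
features = {"ball_position": 3.2, "ball_velocity": -1.1,
            "crowd_noise": 87.0, "grass_color": 0.4}

# ...and a (hypothetical) relevance score for each signal.
scores = {"ball_position": 0.9, "ball_velocity": 0.95,
          "crowd_noise": 0.05, "grass_color": 0.01}

focused = relevant(features, scores)
# Only the two ball-related signals survive the filter;
# the prediction step never touches crowd noise or grass color.
```

The point is not the threshold mechanism itself but the ordering: relevance is decided before prediction, not after processing everything.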

These kinds of models are attractive in concept but cannot scale effectively in the real world. Data must be relevant and monitored dynamically.

The Solution

Current research shows that we learn by simulation and analogy, develop an “intelligence” about the situation and the process, and can then apply that learning to new situations. A simple example is washing your hands. Even in a new environment, with a sink and a faucet you have never seen before, you can easily wash your hands because you have learned by simulation and analogy. This is the essence of how humans learn and the essence of how AI should be developed.

Smaller data sets, more relevant information, and learning by simulation and analogy with real-world applicability can create more useful and powerful “intelligence.” Learning by analogy enables us to take a body of knowledge, apply it to new situations, and function effectively.

Intelligence is the ability to predict the future — knowing what to do in a situation as it unfolds based on a simulation and analogy that has been learned. This is the essence of what a new model for artificial intelligence hopes to accomplish. It should predict a more effective outcome based on learning by simulation and analogy.

Another failure of big data artificial intelligence models is that they don’t reflect how intelligence really functions. We have incomplete information, and we connect dots based on referential data, creating an understanding of possible outcomes and using our analogous thought process to build solutions from incomplete data sets. This is the power of the human intellect and what a biological neural network accomplishes organically. The aspirational model for artificial intelligence is to do the same.

Prediction equals intelligence.

The ability to predict means understanding which data is relevant within the universal set available. It is not constant trial and error, as most models are now structured. AI can never be universally applicable if it is in a constant state of trial and error over larger and larger data sets. What is effective is holding those data sets, understanding relevance, dynamically modifying them based on new and better inputs, and using relevant data to make useful predictions.
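The idea of a small, dynamically maintained data set can be sketched roughly like this (a hypothetical toy, not any production system): keep a bounded window of inputs judged relevant, let stale data fall out, and predict only from what remains.

```python
from collections import deque

class DynamicPredictor:
    """Toy sketch: hold a small, relevant data set, refresh it as better
    inputs arrive, and predict from it rather than from all history."""

    def __init__(self, capacity=5):
        # Old observations fall out automatically once capacity is reached.
        self.window = deque(maxlen=capacity)

    def observe(self, value, relevance):
        # Admit only inputs judged relevant; irrelevant data never enters.
        if relevance >= 0.5:
            self.window.append(value)

    def predict(self):
        # Naive prediction: the average of the retained relevant data.
        return sum(self.window) / len(self.window)

p = DynamicPredictor(capacity=3)
for value, relevance in [(10, 0.9), (200, 0.1), (12, 0.8), (11, 0.7)]:
    p.observe(value, relevance)
# The irrelevant outlier (200) never entered the data set,
# so the prediction is driven only by the relevant inputs.
```

The relevance score here is given by hand; in a real system, deciding relevance is the hard part, which is exactly the article’s argument.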

Cognitive Architecture

Unlike neural network models, the brain works within a more fascinating cognitive architecture. Its structures collect a massive amount of data and can immediately process it into different layers of relevance. Much as AI software layers equation upon equation, the brain does this naturally in its architecture, immediately sorting the data it collects into relevant data sets.

The brain naturally calculates the relevance of data, processes it, and applies the result to enable useful output. The most fascinating thing about this organic process is that all this data is dynamically analyzed locally. This is the distinction between technical networks and cognitive architecture. It is this local processing that enables efficient, relevant computation and quickly delivers impactful results. The brain’s cognitive architecture is not yet mimicked by artificial intelligence structures and networks.

Beautiful and Dynamic

Cognitively, our brain receives signals, recognizes potential actions, prioritizes them, processes the relevant data, and then predicts the most appropriate action. It is a beautiful, dynamic system, efficient and magnificent.

AI systems can learn how to layer, interconnect, and learn more effectively. This is the goal of new AI systems and the vision for more appropriate outcomes, better tools, and more effective applications.

The brain’s cognitive architecture also enables it to gain information, transfer that information, and retain the resulting knowledge. This not only makes current processing more effective, but future processing builds on what was already learned, compounding over time.

Existing AI systems do not do this. Information is forgotten and not used as a foundation for more effective knowledge in different applications. Current AI systems keep knowledge locally with a specific prediction in mind but do not use analogous learning for other applications.

Chess versus Go

An example of this is when Google’s DeepMind learned chess. It went through the process of learning how to play, evaluating potential moves, and became very skilled and adept at chess. But when DeepMind then set out to learn the game Go, it took none of the learning from its experience becoming a chess player to help it play Go more effectively. The games are different, and the argument is that these are different applications, so learning must begin over again.

But this is not how the brain works. Cognitive architecture finds analogies and relevance within all learning experiences that apply to new situations and learning is exponentially more effective.

It is the same reason we learn mathematics in school. The counterargument has always been “I am never going to use differential equations in the real world,” but that is beside the point. You are learning problem-solving, analogous skills, and referential data that help you solve many problems more effectively.

Human learning is an exponential foundation that enables us to gain, transfer, and retain knowledge in many situations and build new solutions for new circumstances. This is real learning and something that AI systems currently cannot accomplish.
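A toy sketch of what transfer could look like (an invented one-parameter example, not how DeepMind’s systems actually work): a parameter learned on one task becomes the starting point for a related task, so far fewer learning steps are needed than when starting from scratch.

```python
def fit(data, w=0.0, lr=0.1, steps=50):
    # One-parameter least-squares fit of y ≈ w * x by gradient descent.
    for _ in range(steps):
        for x, y in data:
            w += lr * (y - w * x) * x  # step toward reducing the error
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]  # underlying rule: w = 2.0
task_b = [(1.0, 2.2), (2.0, 4.4)]  # related task: w = 2.2

w_scratch = fit(task_b)                            # learn from zero
w_transfer = fit(task_b, w=fit(task_a), steps=5)   # reuse task-A knowledge
```

Both runs end up near the right answer, but the transferred start needs only a handful of steps because it begins from knowledge that is analogous to the new task.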

Megawatts versus Watts

The brain is a highly efficient system. It processes data locally, choosing the most effective and relevant options, and delivering these tools and applications with astounding speed. AI systems currently cannot do this. They are massive energy users processing too much data and managing too many irrelevant factors.

This approach can work in a well-defined problem, like playing chess or Go. But it is hardly applicable in a dynamic world where data has different relevancy and applicability. Big data with massive processing for trial and error is inefficient. While it may work playing chess, bigger is not better in the real world.

Decisions need to be quick, dynamic, and well-defined. But often these are predictions for actions that have not previously occurred, and there is no static model of data to draw from. Chess may be complicated, but all the moves are known. A self-driving car in a dynamic neighborhood environment has not seen that specific circumstance before, and the relevant amount of data is not easily understood in real time.

These are potentially life-threatening events, and depending on brute-force computer processing for accuracy can be misleading. It does not reflect what is actually going on. Data sets need to be narrow and well-defined, with the most useful and impactful decisions developed quickly and accurately.

Big data is not the story. Large data sets do not reflect reality. Quality matters along with relevance and dynamic learning to be specifically applicable.

Language

Human language development is miraculous and most likely the single thing that separates us from all other creatures. But language is not a data problem. Perhaps the truly miraculous nature of language is that from incomplete information, and sometimes very little interconnection, we learn a language very effectively. Language makes the point: it is relevant data, referential experiences, analogies, and categorization that create exponential learning and enable the development of language skills. As has been said, Mandarin is not a difficult language, because any 5-year-old can learn it. That’s astounding.

This is a profound lesson for AI systems. Language is the single most effective analogy when it comes to learning from incomplete data, understanding what’s relevant, projecting and connecting dots, and using that data foundation as referential knowledge to predict.

This is the new vision of AI. Looking at incomplete data, narrowing that data set, understanding relevancy, learning, creating new data sets, connecting dots, and learning exponentially.

AI is not a data problem; it is a cognitive architecture problem.

A Better AI Solution

As we have seen, large data models cannot really learn: they cannot transfer knowledge or understanding, they do not understand relevance, they cannot use analogous learning to transfer that relevance, and they are essentially bad at predicting.

Current AI models require massive and increasing data and essentially learn from reinforcement. This cannot scale and is massively inefficient.

For AI to be generally available and deployed effectively, it cannot be an extension of current models that require trial and error, massive data, and continued reinforcement, because their real practicality and predictability are questionable at best. Such a model can perform better at well-defined tasks, but that does not scale.

A better solution is real learning based on cognitive architecture, focused dynamic data, and referential data sets. This is closer to real human learning, more effective and efficient, and offers a significantly better solution.

When we understand the natural learning process — referential and analogous data, categorization, transferring and building upon that data, and creating knowledge applicable to new situations — learning builds upon itself and becomes exponentially effective.

That is the real AI solution.
