The main difference between this and plain neural networks is that there are subtask networks with known goals. Instead of one big self-adapting network looking at the giant data and coming up with "something", we have well-defined subtasks, each looking at the big data and competing with the other subtasks to form its own "picture" of the data.
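To make the competition concrete, here is a minimal sketch of the idea. The subtask names and scoring rules below are illustrative placeholders, not anything specified by this proposal: each subtask has a human-readable goal and builds its own "picture" of the same data, and the pictures compete while all remaining inspectable.

```python
# Hypothetical sketch: several well-defined subtask "experts" each score the
# same input and compete. Names and scoring rules are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subtask:
    name: str                        # human-readable goal of this subtask
    score: Callable[[str], float]    # how well it "explains" the input

def compete(subtasks, data):
    """Each subtask builds its own 'picture' of the data; the best-scoring
    interpretation wins, but every interpretation stays readable to humans."""
    pictures = {s.name: s.score(data) for s in subtasks}
    winner = max(pictures, key=pictures.get)
    return winner, pictures

# Toy subtasks: which kind of content dominates this string?
subtasks = [
    Subtask("digits", lambda d: sum(c.isdigit() for c in d) / max(len(d), 1)),
    Subtask("letters", lambda d: sum(c.isalpha() for c in d) / max(len(d), 1)),
]
winner, pictures = compete(subtasks, "route 66")
```

Unlike a single opaque network, the losing pictures are not discarded: a human can read every subtask's score and see why one interpretation won.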
When you teach the Overstander arithmetic, it uses models to learn how to solve arithmetic. It wants to "know" arithmetic, so it won't just use an online calculator (even though one is at hand). It will build a model that ALSO tells it how to do the carry-over, how to use addition tables, and how to program the CPU to do binary arithmetic, and as the problems get harder it will learn how problems are solved at higher and higher levels.
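As a small illustration of the difference between calling a calculator and "knowing" the procedure, here is the carry-over rule written out explicitly. This is only a sketch of what such a learned model would contain, not an implementation from the proposal:

```python
# Illustrative sketch: the explicit carry-over procedure an Overstander
# would "know", as opposed to delegating to an external calculator.
def add_with_carry(a: str, b: str) -> str:
    """Column-by-column decimal addition with an explicit carry,
    the way the procedure is taught to humans."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad to equal column count
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):   # rightmost column first
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(digit))
    if carry:                                # leftover carry becomes a new column
        digits.append(str(carry))
    return "".join(reversed(digits))
```

Because the procedure itself is represented, the system can explain each carry step, and the same structure generalizes to addition tables in other bases.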
The LAST thing we want from the Overstander is to do poetry. This is not a Turing Test machine; it's a model for achieving actual comprehension, with something to show for it. First, and WAY BEFORE the poetry task, it has to give intelligent answers and decisions for technical and specific requests, using all the "tools" that an intelligent human would use to do that.
At the first stages, it would heavily rely on human assistance.
OCR would have a neural network for edge detection (or any other program that could achieve that) and a separate subtask for grammatical or lexical analysis, and so on. Each subtask is self-contained and well-defined, and readable by humans as well.
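A trivial stand-in for one such self-contained subtask is sketched below. A real edge detector could be a neural network (or, as noted above, any other program achieving the same goal); this placeholder just shows what "self-contained and well-defined" means for a single subtask:

```python
# Hypothetical stand-in for one OCR subtask: a one-dimensional edge
# detector over a row of grayscale pixel values.
def detect_edges(row, threshold=50):
    """Return indices where brightness jumps sharply between neighboring
    pixels: candidate boundaries of a letter stroke."""
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) >= threshold]

row = [0, 0, 200, 200, 0, 0]    # dark background, bright stroke, dark again
edges = detect_edges(row)
```

The subtask's contract (pixels in, boundary indices out) is what makes it replaceable: a neural network fulfilling the same contract could be swapped in without the rest of the system changing.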
This model replaces (or rather adds on to) the regular grammatical or statistical models used for Artificial Intelligence and Artificial Language Analysis (like GPT-3).
When it receives a request for an action, task, or response, the Overstander would use a highly specific set of Overstand models to dissect the request and model it with a set of basic possible "understandings" from within the topic.
It would then navigate along these understandings in parallel in order to come up with an intelligent solution, decision, or action, asking relevant clarification questions along the way. These questions bring the computer to ever better understandings, which it can express in words or demonstrate through its decisions and reactions.
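The dissect-then-clarify loop described above can be sketched as follows. Every name here (`dissect`, `handle`, the three candidate "understandings") is a hypothetical placeholder chosen for illustration, under the assumption that clarification questions prune competing understandings one at a time:

```python
# Sketch of the request-handling loop: dissect a request into candidate
# "understandings", then ask clarifying questions until one remains.
def dissect(request):
    """Stand-in for topic-specific Overstand models producing candidate
    understandings of the request."""
    return [f"{request} as a {kind} problem"
            for kind in ("lookup", "calculation", "translation")]

def handle(request, answer_question):
    """answer_question is the human (or another subtask) replying True/False
    to each clarification question."""
    understandings = dissect(request)
    while len(understandings) > 1:
        # Ask a clarifying question to prune competing understandings.
        keep = answer_question(f"Did you mean: {understandings[0]}?")
        understandings = understandings[:1] if keep else understandings[1:]
    return understandings[0]

result = handle("convert 5 miles", lambda q: "calculation" in q)
```

A fuller version would explore the candidates in parallel rather than sequentially, but the visible trail of questions and answers is the point: the machine's route to its understanding stays readable.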
The result would be that we could "fully" understand what is going on in the big picture.
We could start off with a kitten model for reaction to seeing a moving animal like a lizard, snake or cockroach, starting with instincts and preset responses.
Then move on to Overstand models for doing OCR of handwritten Malayalam, Cuneiform, or Syriac texts, along with all the resources a human would have at hand for that task.
Failures would be analyzed and pointed out to the Overstander software, so that the model's subtasks could be corrected.
Basically, the difference between this and any plain old neural network is that there are human-defined subtasks that help define what is needed for responding correctly to the request.
As an example, reading an ancient text would need the following knowledge modules:
1. Document source types (images vs. text).
2. Resources for finding prior analysis of this text: recognizing whether it has already been analyzed and deciphered.
3. Letter sets and fonts.
4. Recognizing the direction of the text, and the lines of the text.
5. Dissecting the letters and comparing them to each other and to other writings (creating dictionaries and lexicons along the way).
6. Understanding the content of the text, which itself has many subtasks to be identified and defined (for example: finding quotations, finding the topic being discussed, finding the place and time it was written, and finding the borders of our knowledge in each of these, etc.).
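The module breakdown above can be sketched as a simple pipeline. The module names and the string-based stand-ins below are hypothetical; the point is only that each step is a named, human-readable unit whose findings later steps can consult:

```python
# Sketch of the ancient-text modules as an ordered pipeline.
# Module names mirror the numbered list; the bodies are placeholders.
PIPELINE = [
    "identify_source_type",     # 1. image vs. text
    "check_prior_analysis",     # 2. already analyzed and deciphered?
    "match_letter_sets",        # 3. letter sets and fonts
    "find_text_direction",      # 4. direction and lines of the text
    "segment_letters",          # 5. dissect and compare letters
    "understand_content",       # 6. quotations, topic, place, time
]

def run_pipeline(document, modules):
    """Run each module in order, recording its finding; later modules
    could read earlier findings through the shared dict."""
    findings = {}
    for name in modules:
        findings[name] = f"{name} applied to {document}"
    return findings

findings = run_pipeline("uncatalogued tablet", PIPELINE)
```

Each entry in `findings` is a correctable unit: when a failure is traced to one module, that module alone can be fixed or replaced, as described above.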
Oh, and in more advanced versions the Overstander would also want to know WHY we are asking this or that particular request, and what we wish to achieve.
In the most advanced version, it would have its own opinion too.
It could take a long time to build even the simplest model in this fashion, but as time progresses, there will be an automatic model that learns how to model. Learning to learn...