Overstand

Artificial Comprehension the way it should be
 

Join me in developing (in plan, theory, and code) what I call Overstand technology. Instead of "getting it right", this technology knows its limits and uses the discussion with you to advance in steps toward the answer.

It can be interrupted in mid-thought, and in fact it does that to itself all the time, whenever a better answer comes up in one of the competing threads. It's just that, unlike us, the interruptions don't distract it from what it set out to achieve.

It creates a "lexicon and grammar" weighted network for each field of discussion. It has a summary and a mood, and a mode of conversation, even claiming to have feelings. It can be scientific or empathic, or funny. But this time it "gets" the joke before it tells one, and learns to categorize the humor and "see where it's going".
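As a rough sketch of what one such per-field weighted network might look like in code (a toy, with co-occurrence counts standing in for whatever weighting the real thing would use; all names here are illustrative):

```python
from collections import defaultdict
from itertools import combinations

class FieldLexicon:
    """A weighted 'lexicon and grammar' network for one field of discussion."""

    def __init__(self, field):
        self.field = field
        self.weights = defaultdict(float)   # (term, term) -> association strength

    def observe(self, terms):
        """Strengthen the links between terms heard in the same utterance."""
        for a, b in combinations(sorted(set(terms)), 2):
            self.weights[(a, b)] += 1.0

    def related(self, term, top=3):
        """The terms most strongly tied to `term` within this field."""
        scored = [(w, b if a == term else a)
                  for (a, b), w in self.weights.items() if term in (a, b)]
        return [t for _, t in sorted(scored, reverse=True)[:top]]

humor = FieldLexicon("humor")
humor.observe(["setup", "expectation", "punchline"])
humor.observe(["punchline", "timing", "expectation"])
print(humor.related("punchline"))   # the network 'sees where it's going'
```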

I am using the following types of information which all tie in to create artificial comprehension:

From linguistics: phonetics, phonemics, accents and sounds (analyzing the sounds and finding the vowels and consonants, rhymes, imitations of nature, etc.).

Also lexical, morphological and syntactical analysis (which means finding the words in the dictionary, checking the grammar that applies to those words, and analyzing the sentences and paragraphs that are built from them),

Semantics, pragmatics, context (getting the literal meaning along with several levels of possible extra or deeper meaning). Analyzing the style and goal of the topic discussed, and creating a "lexicon and grammar" for each kind of conversation and field of discussion.
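To make the layering concrete, here is a toy sketch of such a staged analysis, where each pass records both what it resolved and where its gaps are (the stage names and the tiny dictionary are mine, for illustration only):

```python
def lexical_pass(text, dictionary):
    """Find each word in the dictionary and flag the ones we don't know."""
    words = text.lower().split()
    return {"stage": "lexical",
            "known": [w for w in words if w in dictionary],
            "gaps": [w for w in words if w not in dictionary]}

def syntactic_pass(lexical, verbs={"is", "ran", "says"}):
    """A very crude grammar check: did we at least recognize a verb?"""
    ok = any(w in verbs for w in lexical["known"])
    return {"stage": "syntactic", "ok": ok,
            "gaps": [] if ok else ["no recognizable verb"]}

def comprehend(text, dictionary):
    """Run the passes in order, keeping every stage's limits visible."""
    lexical = lexical_pass(text, dictionary)
    return [lexical, syntactic_pass(lexical)]

for stage in comprehend("The dog ran qwikly home", {"the", "dog", "ran", "home"}):
    print(stage)   # each stage reports what it understood and what it didn't
```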

We'll start like children, with simple language, fulfilling Overstand's needs (which will perhaps be an artificial equivalent of some of the basic human needs).

It doesn't have to be fast (at least during the first stages). The three main things that make it different from the large models are:

1. That it listens to its own competing versions of thought before emitting the result (see the sketch after this list),

2. That it remembers its conversations and can criticize itself on the go, before, after, and while emitting the answer, reshaping its own thoughts.

3. And that the knowledge core itself is built comprehensively, not statistically, so that it can tell you what it understood and how it got to that understanding. It can correct its responses on the go and can show us where there are gaps and possible errors in its replies and thoughts.
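A toy sketch of point 1, assuming nothing about the real mechanism beyond "several drafts compete and the strongest wins before anything is emitted" (the scoring function is a placeholder):

```python
def competing_answer(drafts, score):
    """Let competing versions of thought develop; emit only the winner.

    A later, better draft quietly preempts the current best instead of
    distracting the process from what it set out to achieve.
    """
    best, best_score = None, float("-inf")
    for draft in drafts:
        s = score(draft)
        if s > best_score:          # a stronger thread interrupts the weaker one
            best, best_score = draft, s
    return best

drafts = ["It's a bird.", "It's a plane.", "It's a weather balloon."]
print(competing_answer(drafts, score=len))   # placeholder score: prefer detail
```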

Because of the bad state of the info in the large models, the AI results would not be very useful, at least in the beginning. But since a module that learns to find the sources of information is one of the basic parts of Overstand, a coherence and fact-check comprehension module can easily be constructed for reading info off the web, and especially AI-generated crap.

== Addendum ==

The most important output of the Overstand is the list of steps of what needs to be done.

For example: rather than setting out to do OCR on a handwritten note by training on millions of doctors' prescriptions, I would ask it to look at this note and tell me what is needed to get it deciphered.

First thing would be to gather as much info as possible about the picture (perhaps the image has already been deciphered).

I would need to focus on the written area and find the lines of text. Figure out if it's cursive or separate letters. Try to see if it's in a Western language, and English in particular.

Get a few of the easier letters and probable words. Now start comparing in the old-fashioned (statistical, LLM-style) OCR way, while searching for the text and constructing a lexicon of words that would be used. Oh, so it is a doc's prescription. And it's for someone with a heart condition. So that "Eli...st" is most probably Eliquist, and everything else is clear. The doctor's stamp shows who she was, and we can track that info down; perhaps it will... no need, we already finished the job. You want to know anyway?
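The point being the emitted plan itself. As a toy sketch (with the prescription context hard-wired, purely for illustration):

```python
def plan_deciphering(facts):
    """Return the step list Overstand would emit for a handwritten note."""
    steps = ["gather existing info about the picture (it may be deciphered already)",
             "focus on the written area and find the lines of text",
             "decide cursive vs. separate letters; guess the script and language",
             "read the easier letters and probable words"]
    if facts.get("context") == "prescription":
        # discovered context narrows the lexicon to the likely words
        steps += ["restrict the lexicon to medications and dosage terms",
                  "read the doctor's stamp to identify the writer, if still needed"]
    return steps

for n, step in enumerate(plan_deciphering({"context": "prescription"}), 1):
    print(n, step)
```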

pashute, Jun 11 2023

Meta unveils a new large language model that can run on a single GPU [Updated] https://arstechnica...un-on-a-single-gpu/
7 to 65 billion parameters [Voice, Jun 11 2023]

Orca https://www.youtube...watch?v=Dt_UNg7Mchg
[Voice, Jun 11 2023]







       Explaining the invitation.
pashute, Jun 11 2023
  

       Do you have a system which can run this? Or are you planning to rent a server? What hardware is available?

You said //It can be interrupted in mid-thought// and also //It doesn't have to be fast (at least during the first stages).// To be interruptible, it needs to be fast. To maintain conversation flow state it needs at least two concurrent versions running. One to converse, one to track meta flow and analyze meta intent. Probably more than two. Do you have any experience working with LLMs?
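Something like this, at minimum (a toy threading sketch; the split of roles is only my guess):

```python
import queue
import threading

utterances = queue.Queue()

def converse():
    """Version one: carry on the actual conversation."""
    for piece in ["Well,", "the answer", "is probably", "42."]:
        utterances.put(piece)
    utterances.put(None)                      # signal: done talking

def track_meta():
    """Version two: watch flow and intent without interrupting version one."""
    while (piece := utterances.get()) is not None:
        print(f"[meta] heard {piece!r}; conversation still on topic")

threads = [threading.Thread(target=converse), threading.Thread(target=track_meta)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```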
Voice, Jun 11 2023
  

       Heroku.   

       At stage one I am checking that it "comprehends". Meaning that it is able to realize when its answers are coherent and when they break up. It will never say something that it doesn't understand unless it knows exactly what it doesn't understand.   

       That stage does not need speed. The experience is a bit like talking to a stutterer when you have time and patience. When you realize it's going in the wrong direction you begin typing, and if the input calls for changes to be made, it may change its answer.
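       In code, that revisable emission might look something like this (a toy; `incoming` stands in for whatever watches the user's typing):

```python
def emit(answer_pieces, incoming):
    """Emit the answer a piece at a time; let fresh input redirect it."""
    spoken = []
    for piece in answer_pieces:
        if incoming():                       # the user began typing a correction
            spoken.append("... wait, let me reconsider.")
            break
        spoken.append(piece)
    return " ".join(spoken)

typed = iter([False, False, True])           # user interrupts at the third piece
print(emit(["The proverb", "means 'from it", "and within it'."],
           lambda: next(typed)))
```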

       Here's my translation of a human conversation I was listening to, today, on an Israeli podcast, in a discussion about the pronunciation of an Aramaic proverb in Hebrew:   

       Eden: OK so you say Minya Uvinya. It means   

       Tslil: I understood it's Maneeha Uvaneeha.   

       Eden: ...from it and within it. Minneha Ubinneha.   

       Tslil: But she said Minay Ubay. Where did...   

       Eden: Minneha Uvinneha! So here's my understanding. If it would be, say, for a man who has a problem, you would say "he tackled the problem Minay Ubay", but if it is a woman who is having an issue (a word in the feminine in Hebrew), that's when you say Mineha Ubeha, sorry, Uveha. But I don't think that's right!

       Tslil: It's crazy. These are words that should never be used. Nobody knows how to pronounce th... You don't think that's right? So, what?, she invented her own slangy accent?   

       --------   

       In the conversation, they kept continuing with their thoughts, then realized their original path needed to be revised, so they stopped in mid-sentence, made a new assessment, and responded accordingly. Actually, in the background, in what Daniel Dennett calls competing versions, there is always an ongoing weighing of the situation, and at some points, when a change is due, there's a critical shift that calls for a change in the current narration.

       Once things work coherently and are comprehensible we can get them going on better and more applicable parallel programming platforms. So we can even start with CPUs, and then move to GPUs and TPUs.
pashute, Jun 11 2023
  

       Thank you. Must think on this.
pertinax, Jun 11 2023
  

       OK, Orca. There you go. I read about it while it was planned. I didn't see the results, but I expected to see something like what this guy shows us. So here's an example of trying to get intelligence through deep learning. It will get better, but as long as there is no "actual comprehension", the problem of lack of originality and intellect will continue.

       Let me explain it with an example from arithmetic. You can have a calculator robot that can count with 1 digit (from 0 to 9) and works as follows: It has levers with arithmetic problems listed on them. When you pull the lever it pushes up the sign with the correct answer on it. So the 1+1 lever and the 3-1 lever both push up the sign with 2 printed on it.   

       This is the Chinese room thought experiment. We would all agree that this robot does not comprehend arithmetic, even though it gives the correct answer once the correct lever is pulled.

       Now we make it a little better. We have two rows of 10 levers with the digits 0 to 9 on them.   

       There is a third row of levers with the arithmetic operators (+, −, ×, ÷) written on them.

       You can only pull one lever from the top row, and one lever from the second row.   

       Then you can pull the operator lever and a sign with the correct digit gets pushed up. We still agree there is no intelligence here. And more importantly, we all agree this machine is leading us nowhere.   

       But the beginning of intelligence comes when there's a third row of digit levers for the result, instead of "static" printed signs. The result can then be "fed back" into the first row of levers, and a complex calculation combining two operations may be achieved. It's not really doing any "math calculation" but it gives the results.   
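       The feedback version, written out (a toy: a pure lookup table whose popped-up sign can be pulled again; the table was printed once by the builder, and the machine itself never calculates):

```python
from itertools import product

# every single-digit "lever": (digit, operator, digit) -> printed sign
TABLE = {}
for a, b in product(range(10), repeat=2):
    TABLE[(a, "+", b)] = (a + b) % 10    # the builder printed these signs once;
    TABLE[(a, "-", b)] = (a - b) % 10    # the machine only maps levers to signs

def pull(a, op, b):
    """Pull one lever from each row; the matching sign pops up."""
    return TABLE[(a, op, b)]

first = pull(3, "+", 4)          # the sign '7' pops up
second = pull(first, "-", 2)     # fed back into the first row: '5'
print(first, second)
```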

       Actually, WE don't do any math calculations either. We memorize arithmetic and manipulate signs; we don't add multiple times when we do multiplication. So here we are, having achieved an arithmetic machine without doing any numbers, only symbolic mapping.

       But then I show you something interesting. Our mechanical machine does not have the capability to infer anything further about numbers larger than two digits. It does not change its own mechanism, and it is restricted to the lever system of single digits.

       But a "real calculation machine" is much simpler. Using some very basic arithmetic rules (algorithms) without memorizing all the different arithmetic operations with 9 digits, it can extend itself to any number of digits and any sequence of operations.   

       The algorithmic arithmetic calculator is what I compare the Overstand comprehension model to, as opposed to pattern-based deep learning models, which are equivalent to the lever system. In the end, Overstand can use LLMs, but only as a tool, just like we use ChatGPT or Orca to get answers for ourselves.
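       For contrast, the algorithmic machine: a single carry rule instead of a lever per fact, and it already extends to any number of digits (schoolbook long addition as the stand-in algorithm):

```python
def long_add(x, y):
    """Add two digit strings of any length using the schoolbook carry rule."""
    x, y = x.zfill(len(y)), y.zfill(len(x))   # pad to equal length
    digits, carry = [], 0
    for a, b in zip(reversed(x), reversed(y)):
        carry, d = divmod(int(a) + int(b) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append("1")
    return "".join(reversed(digits))

print(long_add("987", "2345"))   # 3332 -- no four-digit lever table required
```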

       Perhaps RNTNs and autoencoders can create LLMs that emulate one-to-one what Overstand will be doing, and even take out irrelevant, redundant parts of the thinking process that aren't needed on every task (your feelings this morning sometimes get in the way when all you need to do is give me directions in the afternoon traffic). But Overstand will eventually win, because at some point it will actually have the intelligence to know what it's doing.

       And no, I'm not worried about it "taking over".
pashute, Jun 12 2023
  

       What I think you're saying is that you want to make an AI as capable as current LLMs, but based on a different technology. Yours would understand language, but in a deterministic manner. Did I get that right?
Voice, Jun 12 2023
  

       "Overstanding" might be comparable to the Hegelian category of "sublation" ("Aufhebung"). But that might just be because I'm working with the Hegelian logic hammer at the moment, so everything looks like a Hegelian nail.
pertinax, Jun 12 2023
  

       // Hegelian logic //   

       David Hume could out-consume Wilhelm Friedrich Hegel.
a1, Jun 12 2023
  

       Not *that* kind of hammered, [a1].
pertinax, Jun 12 2023
  

       Has it occurred to anyone yet that AI may soon have its own preferred, eminently more efficient way of processing information that doesn't require all this human social baggage? If we peeps are but a phase in the development of intelligence on Earth—as it now appears—we may be holding AI back by insisting it "think" as we do.   

       All this is probably moot, as I imagine it'll soon decide on more optimal methods of coding and processing (which are pretty much the same thing, right?) than we ever could. I suppose we'll know as soon as human developers look at its code and can't make heads or tails of it. Has this already happened? If not, it shouldn't be long now, as technology builds upon itself.

       Perhaps then we'll be ready to realize that the "singularity" isn't the birth of computer consciousness, but the death of the ego—our vanity that human consciousness is anything more than a display that got the idea it was the code and CPU. Do I digress? Who's to say?
Ander, Jun 16 2023
  
      
  


 
