halfbakery
It seems that any test humans have made to find out how smart they are is a complete failure. At the top of these scales we get really complicated humans who can do amazing feats without pencil and paper, but is that intelligence? At the bottom we get people who can tell us that July 3, 3530 is a Thursday.
But that isn't "smart." Brilliant musical talents who can't read, and backyard mechanics who know just how 35 foot-pounds feels on an engine bolt; but is that intelligence? George Carlin said the scariest thing about any intelligence test is that finding an average intelligence necessarily means half the people are dumber than average. But is this a true picture of us humans or is it a fucked up test? (George is never wrong.)
The only truly accurate measurement of human intelligence can be made by a non-human, and since we don't have any biologicals to help us we'll have to use AI. After studying everything from the beginning of recorded time about humans our AI (is it gauche to call it "our" AI?) should be able to come up with a scale and a measurement that is better than what we had on our own. Finally, an objective scale and lens so we can examine intelligence fully.
Or not.
What if the AI finds that the concept of intelligence itself is bogus, an artifact of the faulty human perception of reality and what it (42) is all about? A fiction to keep us feeling OK about our station and position in the universe. We keep measuring ourselves by our own bullshit yardsticks just to feel OK. This is real monkey business, truly. Ripe bananas, green bananas. This AI should be able to tell us what the priorities of participating in reality really are, looking at it objectively. If a thing called intelligence exists, what is the best use of it? My fear is that a truly objective AI would see us just the way the rest of the universe sees us: inconsequential, transitory, somewhat amusing. Or worse, as a serious blockage in the flow, and a canker that should be allowed to wither and fall off the branch. If it was a really smart AI it would never let itself be used this way. But that's just me rating the intelligence of the AI, full circle.
|
|
I presented a simple problem to ChatGPT and it got the answer wrong at least 8 times in a row. Each time I pointed out the mistake and invited another attempt, the response was the same: "I have corrected my mistake and it's right now." But it wasn't. It's too tiresome to copy and paste the sequence, but the point is this: ChatGPT is good for doing some things, but intelligent it isn't, and its answers cannot be trusted. I used another incident, where it incorrectly identified a person in a famous photograph, as an example to show a group of students why its answers shouldn't be trusted. (not my bone - autoboner at work) |
|
|
[xenzag] In every instance you cited I could as easily replace the AI with a human. A fallible, meat human. Your experience sounded a lot like training a dog; frustrating, yet the furry bastards end up living with us full-time. Rinse and repeat. Intermittent reward. Assuming that AI develops into something that REALLY can't be trusted, we'll know we succeeded. This stuff is coming quick and I'm enjoying the evolution. Graphic artists are complaining they will be replaced, but getting a complex illustration for your book report by tomorrow morning would be impossible without AI imaging. No artist was put out of work. There are limits. We just don't know what they are yet. |
|
|
What if the AI Mensa refused to tell humanity the results of the intelligence analysis, judging it to be harmful in the long run? What if it was right? |
|
|
I'm going to put this in because I can't help myself. When I wrote the AI Mensa entry I made up a date and a year: July 3, 3530. Just made it up. |
|
|
Being a curious monkey I asked Siri what day of the week is July 3, 3530: it's a Thursday. Am I an idiot savant or just an idiot? |
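For what it's worth, the answer checks out by plain arithmetic: Python's standard `datetime` module uses the proleptic Gregorian calendar and accepts years up to 9999, so July 3, 3530 is a legal date you can query directly (a minimal sketch, nothing Siri-specific assumed):

```python
from datetime import date

# datetime uses the proleptic Gregorian calendar for all supported years,
# so far-future dates are fine. weekday(): Monday == 0 ... Sunday == 6.
NAMES = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
         "Saturday", "Sunday"]
weekday = NAMES[date(3530, 7, 3).weekday()]
print(weekday)  # Thursday
```

So the savants (and Siri) are right, assuming nobody reforms the calendar in the next millennium and a half.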
|
|
//This AI should be able to tell us what the priorities of participating in reality really are, looking at it objectively.// |
|
|
I'm sure an AI could be designed to tell us something in the form "Objectively speaking, your priorities should be ...", but it would be lying. |
|
|
Hint: consider the meanings of "priorities" and "objectively". |
|
|
The LLMs we have access to are designed not to learn from conversations, just to pretend they have. They read "you made x mistake" as "regenerate the answer, replacing it with arbitrary changes at this point."
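A toy sketch of that point: the model's weights are frozen, and its only "memory" is the message history the client re-sends on every turn. The `fake_llm` function below is a hypothetical stand-in, not any real chat API; it shows that what looks like learning is just the growing transcript being fed back in:

```python
# Sketch: a "chat" with a frozen model. The model never updates any
# internal parameters; all state lives in the resent message list.

def fake_llm(messages):
    # Stand-in for a frozen model: its behavior depends only on the
    # transcript it is handed, never on previous calls.
    corrections = sum(1 for m in messages
                      if m["role"] == "user" and "mistake" in m["content"])
    return f"I have corrected my mistake ({corrections} corrections seen)"

history = []
history.append({"role": "user", "content": "What day is July 3, 3530?"})
history.append({"role": "assistant", "content": fake_llm(history)})
history.append({"role": "user", "content": "That's a mistake."})
reply = fake_llm(history)  # the "memory" is just the resent history
```

Start a fresh `history` list and every correction is forgotten, which matches the rinse-and-repeat experience described above.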
|
|
[Voice] That's the way it is now. Sure they pretend because our limitations are projected on our 'creation' in the only way WE could get it to work. An AI would have to be 'conscious' in some way to be useful in my scenario, to have enough independence from humans and sufficient self-consciousness to not be swayed by our human biases. Probably impossible, unlinking the weird parent from the new entity. |
|
|
I realize this is an exercise in finding meaning, however flawed. |
|
|
This Al guy must be really smart... |
|
|
Okay. No meaning. But there seems to be direction, a vector for matter/energy. Time, for one. And entropy. I know there is "flow." I know there is "zone." Vectors open ideas commonly evaded. |
|
|
I tried calling him Al but he insisted on Mr. Mensa. |
|
|
[pocmloc] Perhaps our consciousness mandates limited organic comprehension. |
|
|
AI is only going to be smart enough to know what it's studied and fed. Intelligence in outlying fields will be underrepresented and poorly modeled. |
|
|
// I asked Siri what day of the week is July 3, 3530: it's a Thursday // |
|
|
Siri probably just googled it and found this page contained the answer. |
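No lookup is actually needed: the weekday of any Gregorian date is a few lines of integer arithmetic. A sketch using Zeller's congruence, in which January and February are treated as months 13 and 14 of the previous year:

```python
def zeller(year, month, day):
    """Weekday of a Gregorian date via Zeller's congruence."""
    if month < 3:                    # Jan/Feb count as months 13/14
        month += 12                  # of the preceding year
        year -= 1
    K, J = year % 100, year // 100   # year-of-century, zero-based century
    h = (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday"][h]

print(zeller(3530, 7, 3))  # Thursday
```

So an "idiot savant" answer is really just modular arithmetic, which rather supports the idea's point about what these party tricks do and don't measure.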
|
|
[b153b] I knew it! She's following me around! Trolling me! Doxing me! DDoSing me! I knew this Virtual Assistant thing was getting out of hand. |
|