Artificial Unintelligence

For use by governments, military and big business
 (+1, -2)

Using the latest technology in artificial intelligence and super-computing, create software to make the worst possible decisions.

These decisions could then be used to gauge how dumb some ideas are, in order to prevent actual people from making bad decisions. Proposals would be input into the computer and then assessed for the stupidity quotient of any particular idea.

The computer, after crunching numbers and various related inputs, could then determine the SQ (Stupidity Quotient) of the idea - naturally the higher the SQ, the dumber the idea is.

For example, one could enter 'Build a wall between Mexico and USA and make Mexico pay for it'. In this case, using its huge database of information, the computer would assess the possible outcomes. After a few minutes of crunching numbers and extrapolating the results, this idea would receive an SQ of about 140, or genius-level stupidity.

In the case of the input 'Build a wall between Mexico and USA' the AU might deliver a score of 120, or decently stupid level.

In the case of 'Build a wall around Trump Tower', the machine might explode, or give a negative result: -20, or maybe an OK idea.

The mechanism would include looking at historical data, using the outcomes of similar decisions to assess the likelihoods of different scenarios. The quantifier, for the final calculation of the SQ, would run many different simulations, substituting the values of the status quo into the determining factors of the historical data.

Upon each iteration the score would accumulate, resulting in the total SQ value. Where a certain outcome was historically bad, a positive value would accrue; similarly, for a good historical outcome, a negative value would accrue.
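The accumulation loop might look something like this toy sketch; the scenario names, outcome weights, and the 140-point scaling are all invented for illustration:

```python
import random

def stupidity_quotient(historical_outcomes, n_simulations=1000, seed=42):
    """Toy SQ accumulator. Each simulated iteration draws a historical
    scenario and accrues a positive value for a bad outcome (+1) or a
    negative value for a good one (-1), then scales to the SQ range."""
    rng = random.Random(seed)
    scenarios = list(historical_outcomes)
    sq = 0.0
    for _ in range(n_simulations):
        scenario = rng.choice(scenarios)
        # outcome in [-1, 1]: +1 = historically bad, -1 = historically good
        sq += historical_outcomes[scenario]
    # scale so a uniformly disastrous history lands at 140 (genius-level)
    return sq / n_simulations * 140

# Invented example data for the border-wall inputs
outcomes = {
    "wall, neighbour pays": 1.0,   # never worked out historically
    "wall, self-funded":    0.7,
    "no wall":             -0.3,
}
print(round(stupidity_quotient(outcomes)))
```

A history of uniformly bad outcomes would score exactly 140, and uniformly good ones -140; real inputs land somewhere in between.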

A simple algorithm would not suffice; like artificial intelligence, the algorithm could self-modify and adjust according to various situations.

For example, if one were to ask AU whether it is a good idea to heat water in the microwave using a metal cup, the program would look up the historical references on this scenario, and also investigate some of the science behind why doing such a thing may or may not be a good idea.

Using reasoning and the ability to adapt, if the historical data does not exist, the computer would attempt to simulate the situation using science and mathematics. If the goal (to heat water) cannot be achieved without causing significant harm to the tools (microwave/cup), then the idea is assessed a positive score; if, however, no harm is caused and the goal is fulfilled, then a negative score is accrued.
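The sign rule could be written down directly; the particular score values here are invented, loosely following the examples elsewhere in the idea:

```python
def attempt_score(goal_achieved, harm_caused):
    """Hypothetical per-attempt scoring rule: harm accrues a positive
    (stupid) score, a harmless success accrues a negative (sensible) one."""
    if harm_caused:
        # the tool breaks whether or not the goal is met
        return 150 if goal_achieved else 160
    if goal_achieved:
        return -20  # works and nothing breaks: an OK idea
    return 0        # harmless failure: merely pointless

# Metal cup in the microwave: the water may heat, but the tool is harmed
print(attempt_score(goal_achieved=True, harm_caused=True))  # prints 150
```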

For a simple task such as heating a cup of water in a microwave using a metal cup, the computer would give a resounding positive score, perhaps around 140-160 SQ, indicating that it is indeed a bad idea.

For the far more complex situation of border control, a great number of inputs could be assessed in a similar manner; after each input has been balanced and checked against historical data and artificially learned data, the scores can be accumulated and assessed accordingly.

 — AngelEleven, Nov 26 2015


Just elect Trump and there is no need to build anything.
 — xenzag, Nov 26 2015

Yeah, Mexico will be complaining about illegal immigrants from the USA! Maybe Mexico will pay for the wall after all. It is these possibilities that the computer would use to determine the SQ, as it weighs up causality and probability.
 — AngelEleven, Nov 26 2015

I think this is [marked-for-deletion] magic. It's not enough just to say that a big computer will work it out - I'm not sure there's any known way to do what you describe, so you have to give a plausible mechanism.
 — hippo, Nov 26 2015

fixed
 — AngelEleven, Nov 26 2015

 Plausible mechanism...

Preferably with brass gears, and a mahogany case..
 — not_morrison_rm, Nov 26 2015

So "Artificial Brilliance" would score infinitely low on the SQ scale, right? Oh, that reminds me of the SF score by Paul Erdős.
 — Inyuki, Nov 26 2015

 //If the goal (to heat water) cannot be achieved without causing significant harm to the tool (microwave/cup) then the idea can be assessed a positive score//

In politics, either the tool or the goal (or both) includes humans. Humans make their own value judgements (in ways that are not closely correlated with their degree of stupidity). It sounds as though this idea is a not-very-subtle way of over-riding those value judgements with your own value judgements, which are embedded in the algorithm as soon as you start defining "worst".
 — pertinax, Nov 26 2015

The mechanism is a non-sequitur to the idea/rant.
 — FlyingToaster, Nov 26 2015

"just add AI". If this were something simple (such as recognizing the sound of popping bubbles as in one of my ideas) or already widely done (such as using edge tracing to recognize very similar objects) I wouldn't have a problem with calling for AI. But this project would require human level intelligence. You could similarly create ideas for an AI accountant, attorney, president, scientist, librarian, etc etc etc. Just name a problem and say "add AI".
 — Voice, Nov 27 2015

 Not new.

 What is termed "artificial intelligence" is simply the application of fuzzy logic over given inputs and assignment of a scalar value to the result.

One evaluator may decide that a score of 100 is better or more desirable than score of 50, while another may prefer 50 over 100. The AI engine does not judge - that's the prerogative of the evaluator.
 — Tulaine, Nov 27 2015

I think we have this already in the United States Government.
 — travbm, Nov 28 2015

I was going to take a gun to school Monday, but my homemade stupidity simulator only gave me a score of 3, so I decided it was too clever an idea to waste on the stupid masses
 — Edie, Mar 01 2018
