halfbakery




Super AI

Domain specific integration modules for AI
  (+3, -2)

Recently I rather flippantly asserted that judgement problems could be fixed with AI voting. Here is the detailed explanation, if anyone is interested -

Self-learning neural network AIs are pretty much told what to do, i.e. try this game 1E38 times and modify your neurons to improve. For example, if it's chess, the AI will try to beat all known recorded chess games, and then play against itself until the level at which it plays itself to a stalemate is way above the standard of a human player.
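A toy version of that "play, score, adjust" loop might look like this. It is a gradient-free hill-climbing sketch, not a real neural-network update; the game and the single `weight` parameter are invented for illustration:

```python
def self_play_update(policy, play_game, n_games=1000, lr=0.05):
    """Stand-in for 'modify your neurons to improve': after each round,
    nudge the parameter in whichever direction scored better."""
    for _ in range(n_games):
        up = play_game(policy["weight"] + lr)    # score of a small tweak up
        down = play_game(policy["weight"] - lr)  # score of a small tweak down
        policy["weight"] += lr if up >= down else -lr
    return policy

# Hypothetical 'game' whose score peaks when weight == 3.0.
def play_game(weight):
    return -abs(weight - 3.0)

trained = self_play_update({"weight": 0.0}, play_game)
```

A real self-play trainer adjusts millions of weights from win/loss outcomes, but the shape of the loop - try, score, adjust, repeat - is the same.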

But to learn, it needs a history of games. This is the 'training set'.

It can be taught to win ruthlessly, or to avoid offending the opponent by winning only by a certain margin. That is the area we explore here.

While learning, the AI is 'aware' of the advantage it has over the human, so it must score itself lower for outcomes where a human is offended - for example, choosing a win in 9 moves rather than a win in 2 moves, or even reducing its probability of a win to less than 100% at the discretion of an input parameter.
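That kind of self-scoring is essentially reward shaping. A minimal sketch, where `margin_limit` and the `mercy` weight are made-up tuning parameters (the "input parameter" above):

```python
def shaped_score(won, moves_to_win, margin, margin_limit=2, mercy=0.5):
    """Score a won game lower the more brutally it was won.
    'margin' is how far ahead the AI finished; winning quickly
    (few moves) is also treated as more offensive."""
    if not won:
        return 0.0
    brutality = max(0, margin - margin_limit) + max(0, 10 - moves_to_win) / 10
    return 1.0 - mercy * brutality

# A 2-move crush by 5 points now scores worse than a slow, narrow win.
crush = shaped_score(won=True, moves_to_win=2, margin=5)
narrow = shaped_score(won=True, moves_to_win=9, margin=2)
```

Training against scores like this, rather than raw win/loss, is what steers the AI toward the "win in 9 moves" behaviour.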

When we start talking about driverless cars there are all sorts of extra parameters that need fine tuning from external inputs.

Is paintwork more expensive than broken bones? Does a closing speed into a warm object increase the risk of insurance penalties?

All these risks are probably not known when driverless vehicles go into production, so it would make sense to open up AI decision making in driverless cars to later AI modules. But this is in fact the recipe for any Super AI - the training set must accept a number (probably more than one) of extra decision-making inputs to modify an outcome as directed.

Think of it like a potential future AND gate that could reduce your driverless car insurance premiums should you load "Extra Careful Plus" AI into that extra input.
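A sketch of that plug-in gate, where the module interface and the "Extra Careful Plus" behaviour are pure invention:

```python
class DecisionModule:
    """A later-loadable vendor module: it sees the car's proposed
    action and may veto or soften it before execution."""
    def review(self, action):
        return action

class ExtraCarefulPlus(DecisionModule):
    """Hypothetical third-party module that caps acceptable risk."""
    def review(self, action):
        action["risk"] = min(action["risk"], 0.5)  # the 0.5 cap is illustrative
        return action

def decide(proposed, modules):
    """AND-gate style: every loaded module gets to modify the outcome."""
    for module in modules:
        proposed = module.review(proposed)
    return proposed

careful = decide({"manoeuvre": "overtake", "risk": 0.9}, [ExtraCarefulPlus()])
```

An insurer could then price premiums on exactly which modules are loaded into that extra input.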

At the very least you'd want those APIs so that, legally, you could record when the self-driving car decided to recall itself for expensive repairs, i.e. decided to take you into a slow-speed impact with a wall.

Intelligent systems need multiple vendors.

I thank you.

bigsleep, Jul 23 2017


       A natural by-product of this is that training sets must be published along with the patents.
bigsleep, Jul 26 2017

       A training set is only required if there is a single AI or it's the first array of a particular configuration. Updating an existing AI from a new one is achieved by loading the array state.

       The array needs to learn from its mistakes without external input --- unless that input is to indicate a new correlation between the input vector and a better output.
madness, Jul 27 2017

       Inspired by this concept, BUNGCO is going to work on the "Extra Sexy Plus" AI module, such that the AI gets extra points if at the end of the task the human accepts an invitation to come up and view its etchings.
bungston, Jul 28 2017

       //A training set is only required if there is a single AI or it's the first array of a particular configuration. Updating an existing AI from a new one is achieved by loading the array state.//

       Mostly agreed, but irrelevant for this idea, which uses multiple AIs of potentially different topology and an ever-increasing 'first training set' composed of all experiences with the outside world, e.g. the trips taken by a self-driving car.
bigsleep, Jul 30 2017

       It seems like a driverless AI thing could be where all the AIed cars on the road share the pixel-like geography of all their near misses, and then update models of how to miss things even more elegantly, i.e. with a little more space between things.
beanangel, Jul 30 2017

       I've never met an Al that was super at chess. They tend to be great at car repair though.
bob, Jul 31 2017

       What's the actual idea? I'm missing something here. Is it to add governors that override neural network decisions?

       Presumably current implementations of self-driving cars, and other AI applications, still have more than a few if-then-else statements. By the time neural networks can learn as a child truly learns, that would probably change somewhat, but could then probably be expressed as in Asimov's laws. Sorry, not seeing anything new here.
theircompetitor, Jul 31 2017

       //Presumably current implementations of self-driving cars, and other AI applications, still have more than a few if-then-else statements.//

       AI doesn't work like that. If it were all hard-wired with ifs and elses it couldn't adapt to a new road etc. It's trained to learn patterns and apply them as best it can to new situations.

       This idea is that one vendor's AI might not be able to learn, e.g., to prioritize hitting a cold object over a warm body, and then to be retroactively retrained not to smash into a restaurant rather than a large dog.

       Some AI networks are a bit rubbish, so you want a collection of the best AIs, using the same history of training, that money can buy - or that is given away free on the basis of brand development and insurance reduction.

       Without a jury of independent AIs, a car manufacturer is putting their neck on the line that they make the best AIs as well as the best car. It's all eggs in one basket. Far better to make Tesla AI a subsidiary and be happy if it succeeds, rather than risk the entire brand on AI software which may need more resources to develop.
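       The jury could be as simple as a majority vote with a conservative tie-break; the action labels and their safety ordering below are invented for illustration:

```python
from collections import Counter

CONSERVATIVE_ORDER = ["brake", "swerve", "continue"]  # safest first (invented)

def jury_decision(votes):
    """Majority vote across independent vendor AIs; ties are resolved
    in favour of the most conservative option."""
    tally = Counter(votes)
    best = max(tally.values())
    tied = [option for option, count in tally.items() if count == best]
    return min(tied, key=CONSERVATIVE_ORDER.index)

verdict = jury_decision(["swerve", "swerve", "brake"])  # majority wins
split = jury_decision(["swerve", "brake"])              # tie -> safest option
```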
bigsleep, Aug 01 2017

       I brought up procedural code not to debate modern AI implementations, but as an illustration of "overriding" neural network decisions. I think I better understand what you meant now.   

       If AI achieves its Nirvana state, then it will be continuously learning, as the network can be virtually arbitrarily large, and can use both other AI input and cluster arbiters (i.e. decisions can be made by a cluster of AIs within N miles of Exit 11, not just the individual car). But it's most likely such interchanges would be procedural or input-data focused and not "intelligent" - i.e. the systems will inform each other's data, not talk, as in ("you think it's better to slow down here?")

       There's not as yet a standard that would allow Google Now to interact with Siri or Alexa.
theircompetitor, Aug 01 2017

       This is all well and good until someone takes their self-driving car on an extended foreign holiday somewhere where they drive on the other side of the road. Three months' worth of data gets collected before uploading to the central core. Another month later, the global firmware update is shipped wirelessly, and every self-driving car suddenly develops a number of behavioural glitches.
zen_tom, Aug 02 2017

