Public: Government: Alternative Forms
Consensus   (+2)
May the consensus roboclerks roll.

When the coalition cabinet of one's own country adopts a "sense of the meeting" decision-making procedure instead of voting, one has to take notice.

As I understand the procedure, "motions" are not fixed: after being "floated" by the chairperson they are continuously modified until nobody present objects to the wording, at which point the wording is immediately entered into the record as a decision (a rough sketch of this loop follows the post below).

Apparently the UN recommends the procedure.

Apparently, too, the major difficulty encountered is not personal obstinacy but finding a chairperson who can put the old "chairman as boss" role behind "him" and become a "chairperson as interpreter".

Significantly, women make very good consensus-seekers, as witnessed by the increasing number of them using the procedure in politics and on business boards.

Meanwhile - if consensus is better than voting, but we males can't hack women taking over, has the time come for the genderless roboclerk of science fiction to roll up - millions of 'em?

A feeble idea, but dollar for dollar I see better prospects of peace resulting from this kind of research than from the Satellite Wars being prepared for in several countries.
-- rayfo, Jun 08 2001
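
A minimal sketch, in Python, of the "sense of the meeting" loop described above, under assumed simplifications: members are modeled as hypothetical functions that either return None (no objection) or a proposed re-wording, and the chair simply adopts the first amendment and re-floats it.

```python
# Sketch of the "sense of the meeting" procedure described in the post above.
# Each member is a callable that, given the current wording, returns None
# (no objection) or a suggested re-wording. All names are hypothetical.

def seek_consensus(initial_wording, members, max_rounds=100):
    """Float a wording and revise it until no member objects."""
    wording = initial_wording
    for _ in range(max_rounds):
        proposals = [p for p in (member(wording) for member in members)
                     if p is not None]
        if not proposals:
            return wording          # nobody objects: record it as the decision
        wording = proposals[0]      # chair adopts the first amendment and re-floats it
    raise RuntimeError("no consensus reached within the allotted rounds")

if __name__ == "__main__":
    members = [
        lambda w: None if "funding" in w else w + " with funding",
        lambda w: None if "review" in w else w + " subject to annual review",
    ]
    print(seek_consensus("Adopt the transport plan", members))
    # -> "Adopt the transport plan with funding subject to annual review"
```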

Legit ut clericus "He reads like a clerk" http://www.scholars.../otheo/3.1.read.htm
From a time when the requirement to read Psalm 51:1 could save you. [reensure, Jun 08 2001]

OK, getting this straight -- is the idea to develop automated systems that interpret consensus findings in meetings? Hmm. AI research has more vital things to strive for, like interpreting social health and welfare data and beating Kasparov, but generally speaking this may be as good a use for it as any. I suppose general AI efforts could eventually lead to "machines" that could perform politically interpretive tasks, as our own neurological "machines" can today -- but who's to say that, if female neurochemistry is more apt to consensus interpretation than male, the "machines" we produce to do the same job are going to "magically" emulate that persuasion?

The "machine" that beat Kasparov emulated intuition in a surprising way -- its developers were not programming AI in the traditional sense (pattern interpretation), yet it seemed (and by Turing's definition, seeming is being) to achieve it. In game one, Deep Blue beat Kasparov with its sheer ability to calculate. It knew for certain, simply because it had the raw processing power to work it out, that it could leave its badly threatened king undefended in favor of collecting an extra pawn, still protect the king in time, and that the extra pawn would pay off handsomely down the road -- and it did. In game two, Kasparov realized he could not think as thoroughly as the "machine", so he played less aggressively and sought to give the machine no room for sheer tactics. Machines, with no calculable basis on which to establish the value of a decision, will just pick one. The human ought to be able to win under those conditions, because the human can make intuitive decisions based on familiar patterns and intelligence. Deep Blue was not programmed to interpret patterns in that way. But to everyone's surprise, the decisions it made did "seem" to indicate an intuitive feel for the patterns. Certainly, it could have been coincidence, but the appearance of intuition, as Turing -- I believe correctly -- suggests, is all we can really point to as proof of it.

The point is, this event, this magic crossover from sheer computing power to apparent AI, happened on its own, unpredictably. So what we eventually develop from our AI research will likely be unpredictable nincompoops. Sure, they can beat Mr. Kasparov in a game, but they will probably be just as biased as existing "intelligence", for whatever reason we are, and we will find ourselves asking our androids, "What the f**k were you _thinking_?!"
-- globaltourniquet, Jun 08 2001


Very well put [gt]. On the flip side, remove AI from the equation and stick to the basic computational power of comparison. If A=B and A=C, then... Computers can be programmed with the ability to remember things that humans tend to forget. As we all know, one of the biggest problems with humanity is that it fails to learn from mistakes made in the past. If you program a system with enough historical data, it should be able to compare what is presented to it and either accept or deny it on the basis of prior outcomes (a rough sketch follows this annotation). This is what they did with Deep Blue. If you throw enough data into the system, you can ultimately predict the outcome (99.9% probability). There will, of course, be that time or two when we will find ourselves asking, "What the f**k were you thinking?!" But... don't we already do this on an almost daily basis with politicians?

Just playing the Devil's advocate.
-- Reverend D, Jun 08 2001
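
A rough sketch of the precedent check described above, assuming a hypothetical table of past decisions and their outcomes and a crude keyword-overlap notion of "similar"; a proposal is accepted only if most overlapping precedents turned out well.

```python
# Sketch of the "compare against historical data" idea.
# The history table and the similarity measure are hypothetical stand-ins.

HISTORY = [
    # (keywords describing the past decision, did it work out?)
    ({"dam", "hydroelectric"}, False),
    ({"dam", "irrigation"}, False),
    ({"wind", "subsidy"}, True),
    ({"rail", "subsidy"}, True),
]

def judge(proposal_keywords, history=HISTORY, threshold=0.5):
    """Accept a proposal if the majority of overlapping precedents succeeded."""
    relevant = [outcome for keywords, outcome in history
                if keywords & proposal_keywords]
    if not relevant:
        return "no precedent - refer to humans"
    success_rate = sum(relevant) / len(relevant)
    return "accept" if success_rate >= threshold else "deny"

print(judge({"dam", "flood-control"}))   # deny: dam precedents went badly
print(judge({"solar", "subsidy"}))       # accept: subsidy precedents went well
```

The failure mode snarfyguy raises below is visible even in this toy: whoever fills in the history table decides what counts as a past mistake.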


I get you, Reverend. Because the "machine" is better at learning from the past, it will be more likely than our illustrious politicians to consider past error in current judgement calls. All the more, then, I maintain that what you will be getting will by definition be at least as unpredictable as your average human wanker, if not more so. The improvement you identify will probably make it the most baffling lawmaker in history. At least with humans, we can ask "What the f**k were you thinking?" and get a satisfactory answer, like "I dunno." The machine will answer that inevitable question with such manifest obscurity that you will just want it to shut up. Talk about pedantry.
-- globaltourniquet, Jun 08 2001


But what of the political bias of those directing the programming of the roboclerks? Instructing the 'clerks that "Decision X made by Y on Z date is now thought to be a mistake" won't always do, as there's no strict consensus on history. The lesson learned from building dams that provide hydroelectric power but upset salmon spawning patterns, for example, depends on one's political and environmental leanings. You'd essentially be teaching the robots history, but whose version of history would be taught?
-- snarfyguy, Jun 08 2001


Which underscores my point. How does your neurochemistry come to one or the other conclusion regarding value? Is it any different than if a machine, programmed to make selections based on situational valuation, decides to do something? Who knows? Who taught _you_ the value of salmon spawning patterns?
-- globaltourniquet, Jun 08 2001


Thanks all for taking my feeble idea seriously.
-- rayfo, Jun 09 2001


rayfo: Thank you for bringing to light a staffing trend to which I've not attached much importance: the difference in competencies by gender. I thought equality of the genders was being pushed as a 'leveling' technique of social engineers, but I'm beginning to understand now that it is an outgrowth of an evolutionary advantage given to each gender. Bravo for reading more than I into this! As is said, "Before good news gets its shoes on, bad news has traveled around the world."
-- reensure, Jun 09 2001


Whooooa there. I thought the original idea was about the concept of consensus-reaching versus voting procedures? I happen to be the chairperson of a national student affairs committee, and I know first-hand that women chair meetings better overall. Certainly with the (overwhelmingly male) committee that I deal with, the most useful thing I do is to keep the discussion always coming back to the point, and I do that by constantly reminding the guys where the consensus is. In the end, we have to make policy by voting, but most discussions don't start off with a paper, so the question on which we vote has been generated from my interpretation of the consensus.
On the question of whether this should be used in national government, you have to ask yourself whether the 'chair' (in the case of the UK, the Speaker of the House) is competent, reliable, and unbiased enough to generate a consensus interpretation when dealing with a very large number of debaters (650+ in the UK, and the Speaker belongs to one of the political parties). I'd say it was doubtful. Nice idea though.
-- lewisgirl, Jun 10 2001


If the Speaker belonged to the party with the majority, then it would be the public's choice as to who was consensusing(?). That is, of course, based on the idea that you don't want to replace elections with a national consensus system (which would make elections even more insane - remember to cast your ideas in 2001, it's polling year).
-- RobertKidney, Jun 10 2001


Quite.

Last Thursday, 24% of the UK eligible electorate voted for the Labour Party to remain in power. This, astonishingly, was a landslide victory, since only another 23% of the eligible electorate chose to vote for other guys. Since 76% of the country did not choose the party to which the Speaker belongs, I would be very unhappy with a system under which decisions on my country's laws, public spending, foreign policy and taxation were reached by an unrepresentative member of an unrepresentative party interpreting the consensus of a discussion undertaken by 650 other unrepresentative members of unrepresentative parties. Capiche?

However, since rayfo's idea was about the Cabinet, not the entire Parliament, I could go with it, since Tony, Jack, David, Robin, Peter et al have arguably less input into decisions on my country's laws, public spending, foreign policy and taxation than the unrepresentative unelected members of that political party's behind-the-scenes think-tanks, all headed by some unrepresentative unelected political gamesman called Alastair. So it's the status quo really.
-- lewisgirl, Jun 11 2001


But the idea here was to have robots (androids? AI systems?) do it, rather than men or women. Which leads us back to the intelligence issue -- can a machine analyze disparate opinions and synthesize a consensus? My contention is that arguments against it are no more legitimate than arguments against having a man or woman do it, because we can no more determine why a human comes up with something than we will be able to determine why a machine does, if it is truly AI. Simply by definition.
-- globaltourniquet, Jun 11 2001


Regarding the fact that we will get no better results from a bot gov't than from a human one, I am in complete agreement with you. I am assuming true AI, however, meaning an intelligence indistinguishable from human, when I say you will also get no worse. I agree about the information problem, but I am past that. I am saying, if it is possible for AI to spawn itself, if it is true intelligence, how does it differ from what we have already (i.e. gov't by humans who what-the-f**k-are-you-idiots-thinking)?
-- globaltourniquet, Jun 12 2001


GT: If that's what rayfo's idea is, and I don't think it is, then it's baked in countless works of science fiction. Otherwise no, there is no reason to think that true, engineered intelligence will be distinguishable from our own. Indeed, that is the very test, is it not?

But we digress...
-- snarfyguy, Jun 12 2001


I think UnaBubba will assert that it is impossible for AI to spawn itself, because that would be synthesized intelligence. If I read correctly, globaltourniquet places the machine-as-human distinction under an appropriate constraint: //indistinguishable from human... no worse... (than) we have already//. That is in line with what Reverend_D stated above: //humanity... fails to learn from... a system with enough historical data... to compare... and either accept or deny it (based on the acceptability of the input)//.

rayfo is suggesting that consensus building is a more reliable method to test input from any source, human or machine. Roboclerks could be programmed to elicit input and automate a process of validation and return to the point of order (a toy sketch follows this annotation), as per lewisgirl: //to keep the discussion always coming back to the point... constantly reminding (the cabinet's human element) where the consensus is//. This form of automation would be more efficient than its human counterparts and not subject to bias. AI in practice would distort the process if a machine's internally generated (normalizing) data were entered into a recursively built consensus. Since the machine would be specialized to build consensus, it would be unable to determine its own best interests and would have to rely on outside governance to act on its decisions (executive decision).

Willing public servants would become scarcer if brainstorming in round-robin fashion were to constitute their workday.
-- reensure, Jun 12 2001
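
A toy sketch of the roboclerk role described above: it elicits input, applies a (deliberately naive) substring check that a contribution is on the point of order, and keeps announcing where the consensus currently lies. The class name, the relevance test, and the sample contributions are all invented for illustration.

```python
# Sketch of a roboclerk that validates contributions against the point of order
# and reports a running consensus candidate. Hypothetical names throughout.

class Roboclerk:
    def __init__(self, point_of_order):
        self.point = point_of_order
        self.positions = {}                     # wording -> supporter count

    def hear(self, speaker, contribution):
        """Accept an on-topic contribution, or redirect the speaker to the point."""
        if self.point.lower() not in contribution.lower():
            return f"{speaker}, please return to the point: {self.point}"
        self.positions[contribution] = self.positions.get(contribution, 0) + 1
        return f"Noted. Current consensus candidate: {self.current_consensus()}"

    def current_consensus(self):
        if not self.positions:
            return "(none yet)"
        return max(self.positions, key=self.positions.get)

clerk = Roboclerk("school funding")
print(clerk.hear("A", "Increase school funding by 5%"))
print(clerk.hear("B", "What about the new stadium?"))   # redirected to the point
print(clerk.hear("C", "Increase school funding by 5%"))
```

Note that the "consensus candidate" here is just the most-supported wording so far, which is closer to tallying than to the amend-until-no-objection loop sketched under the original post; a fuller roboclerk would presumably combine both.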


The ideal solution would be to have AI, which is not subject to bias and could be more efficient, do the work - come up with all the possible options - and then have elected government members check the results and vote on them. This way complete rubbish cannot sneak in (much).

As for the elections, if half the population do not want a say in the government of the country, that is their problem. They cannot complain if someone they don't like gets in, and they certainly shouldn't be able to claim that the government is not representing the views of the people.
-- RobertKidney, Jun 12 2001


The Turing test is, basically: ask a machine to respond to some stimuli or perform some task, and if it acts in a way that makes you think it's human, then it is artificial intelligence. Because we have no basis for determining how we come up with our notions, we cannot say that the way a machine does it is any different if the resulting behavior is human-like -- perhaps only that our notion-machines are more "sophisticated", but nothing more. But UB's comment makes me think of the possibility of asking the question, "Um, just how artificially intelligent are you, Chairman Robot, seeing as how you believe aliens must be eradicated to prevent abduction?" This may not be dissimilar to questions of intelligence we might ask of our fellow flesh-based notion-machines -- even some of those chairing political committees.

But regarding systems that are as reensure suggests, that is

//programmed to elicit input and automate a process of validation and returning to the point of order //

without the distortion of AI (which is indeed, as already described, the same thing as the distortion of flesh-based thinking machines), my response is, wull fer sure, it would be more efficient. And WIBNI. Perhaps I'll start programming, and I'll be done in, oh, say, never. I'd finish my database of everything first.
-- globaltourniquet, Jun 12 2001


Why does it have to be one system or the other? Why not both? That is what the Bakery does... Sorry, go fish on your baked idea.
-- Around TUIT, Aug 13 2004


