halfbakery
Nanoblob processing

Wireless computation between unconnected but independently addressable nano nodes
  (+6, -1)

Much of the hassle & cost of chip design, neuron wiring, etc is that each of the neurons/nodes/units needs to have physical links with others on the chip/brain/whatever, and these physical links take up a lot of space & distance for signal travel.

But if the signals between nodes were packetised with addresses, then each could just blast out a pulse (in all directions) every time it activated. Each node would be continually blasted by every other node, but would only notice & act upon messages addressed to it.

Assuming that the blast uses some form of electromagnetic effect, it should travel at light speed. With the blob being at most a centimetre or so in radius, the maximum inter-node time would be almost instantaneous (about 33 picoseconds, or 1/30 of a billionth of a second, if equivalent to vacuum; rather slower depending on the material used)
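The timing claim above is easy to sanity-check with a one-line calculation, assuming (as above) a 1 cm blob radius and vacuum light speed:

```python
# Back-of-envelope check of the inter-node signal time.
# Assumes a blob radius of 1 cm and vacuum light speed; a real
# medium would be slower by its refractive index.
c = 3.0e10          # speed of light, cm/s
radius_cm = 1.0     # assumed blob radius
t = radius_cm / c   # one-way travel time, ~3.3e-11 s (33 picoseconds)
```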

In the (theoretically) simplest model, each node has a few basic logic functions and a held 'value' - maybe as simple as 1 bit. The incoming packet has 4 elements: target address, value, logic request, source address. All the target node does is perform the logic function on the two values and blast back the result, meanwhile either retaining its original value or updating to the new value, depending on the logic step.
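As a rough illustration, the single-bit node described above might look like this; the class name, the packet field order, and the particular set of logic requests are assumptions for the sketch, not part of the idea:

```python
# Minimal sketch of the 1-bit node model. Every node "hears" every
# packet but only acts on packets carrying its own target address.
class Node:
    def __init__(self, address, value=0):
        self.address = address
        self.value = value  # the node's held 1-bit value

    def on_packet(self, packet):
        """Handle a broadcast packet; return a reply packet or None."""
        target, value, op, source = packet
        if target != self.address:
            return None                      # not addressed to me: ignore
        if op == "AND":
            result = self.value & value
        elif op == "OR":
            result = self.value | value
        elif op == "XOR":
            result = self.value ^ value
        elif op == "SET":                    # a logic step that updates the held value
            result = self.value
            self.value = value
        else:
            return None
        # blast the result back, addressed to the sender
        return (source, result, "RESULT", self.address)

n = Node(address=7, value=1)
reply = n.on_packet((7, 1, "XOR", 3))   # addressed to node 7: answered
ignored = n.on_packet((8, 1, "AND", 3)) # addressed elsewhere: ignored
```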

In more complex, but possibly more practical models we move from nanoblobs to microblobs, with each 'blob' being more like a cell that has some internal structure and organisation, can hold more complex data and perform more complex logic.

This would allow massively parallel computing (all nodes work in parallel, and presumably there could be billions, trillions, or more in any blob).

(Blobs would be small - this means a) low power node pulses and b) pulses reach all nodes quickly)

It would mean that a blob would be capable of continuing to operate even in liquid form - so bending, stretching, impacts, deformation etc. would not be a problem

Links and paths would be completely self-organising, and/or the whole blob could rewire at a nanosecond's notice to map a pre-existing configuration

Challenges

All of these probably require a move towards micro-nodes rather than nano-nodes

a) addressing overhead ... with billions of simple-as-possible nano-nodes, the node address will be many times bigger than the cargo value stored by that node - and in the suggested framework, you need two addresses per 'packet' blast
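The overhead is easy to quantify: with N nodes each address needs ceil(log2 N) bits, and a packet carries two of them. A quick sketch (the field sizes are illustrative assumptions):

```python
# Quantifying the addressing overhead for a blob of N one-bit nodes.
import math

def packet_overhead(n_nodes, payload_bits=1, op_bits=2):
    """Return (bits per address, total bits per packet)."""
    addr_bits = math.ceil(math.log2(n_nodes))
    total = 2 * addr_bits + payload_bits + op_bits  # two addresses per blast
    return addr_bits, total

addr, total = packet_overhead(10**9)  # a billion nodes
# each address alone is 30 bits - 30x the 1-bit cargo value -
# and only 1 bit of the 63-bit packet is payload
```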

b) 'blast' transmission ... what kind of 'blast' could be emitted by ultra-nano nodes? Nodes could all be embedded in some matrix of super-high-conductivity material, but I'm thinking more like radio waves that just penetrate everything anyway

c) information encoding ... at large scale it's relatively simple to encode information onto a carrier wave, but when the idea is to encode tiny elements of information, and when this has to be encoded and decoded by tiny units, the whole concept of wave and signal looks a bit more problematic.

kindachewy, Jun 24 2009

Modular neural network Modular_20neural_20network#1179579418
[xaviergisz, Jun 24 2009]

The Modular Man http://parallelten....iki/The_Modular_Man
A super-being from Alan Moore's Tom Strong comics, who goes nano at one stage. [Aristotle, Jun 24 2009]

Intra-chip RF communications http://portal.acm.o...tion.cfm?id=1366182
[TolpuddleSartre, Jun 24 2009]

Programmable matter http://en.wikipedia...Programmable_matter
[xaviergisz, Jun 28 2009]


       I'm not sure that some kind of electromagnetic blast effect would be reliable or discrete enough. Maybe docking fibre optics might be the best way to go.   

       However this idea is a bit vague and prior art is starting to build up ...
Aristotle, Jun 24 2009

       It would have to find a feasible scale, between atomic level (max 'blast' could be e.g. 1 electron, which is clearly not going to achieve 3D broadcast effect, or be able to carry information, and will probably be absorbed rapidly before it reaches its target node) and a level where each node is a full-scale machine in a relatively standard mesh layout.   

       There are a lot of bits of this where I'm no expert (this is only HALF baked after all), but my gut tells me there's got to be a useful midpoint.   

       What's the 'docking fibre optics' concept [Aristotle]?
kindachewy, Jun 24 2009

       I agree with your line of thought, i.e. that using existing computer architecture to make massively parallel processors such as neural networks is inadequate.   

       The best existing neural networks (i.e. animal brains) have each neuron connected to up to 100 other neurons. This is very difficult to replicate on an integrated circuit, especially as you scale-up the total number of neurons.   

       Your idea makes interconnecting large numbers of nodes easier, however there are three main problems I can see: power, noise and programming.   

       Power: how do you power each node? One solution is wireless power such as a fluctuating magnetic field. Each node would have an inductor to tap into the magnetic field.   

       Noise: With enough nodes all trying to communicate with each other, it's going to get very noisy. One solution is to use a frequency that attenuates rapidly with distance (although this means the nodes can only communicate with other nodes which are relatively close). Maybe communicating with terahertz or light would be suitable.   

       Programming: How do you get your collective nodes to do something useful? Neural networks are trained by iteratively adjusting the nodes and their 'weightings'. This adjustment is easy to do when the nodes are all merely lines of computer code. The adjustment is much more difficult when your nodes are 'free-range'.   

       Anyway, I like the way you're thinking [+]. See my linked idea for a similar train of thought.
xaviergisz, Jun 24 2009

       [TolpuddleSartre] - can't access the link as it's a members only site. Looks v. interesting - what's the gist?   

       [xaviergisz] (thinking as he goes ...) Each of these bursts carries energy, so absorption of excess bursts not only absorbs some noise, but allows energy to be recycled (though this reduces the burst travel distance)   

       (aha - perhaps ...) Energy is provided at a gross level to the overall blob, as per heat energy to a material. This is picked up and passed around by component node 'bursts', as if between atoms in the material.   

       Signal is added to the bursts by modulating the energy in some way - again as per [Aristotle] this depends on the scale at which the concept is implementable   

       Each node continually receives & emits energy bursts, but can get away with emitting fewer bursts than it receives because of multiple redundancy (all the other nodes around it), hence able to run a net energy 'profit' while avoiding overheating by 'bursting' or even 'double-bursting'   

       Default node process deals with each incoming burst as "if addressed to me, apply logic process and emit as soon as enough power. If not addressed to me and sufficient power, re-emit unchanged. If not addressed to me and insufficient power, do nothing (i.e. just absorb)"   

       At some ratio between power needed to emit, and power recoverable from absorption, enough nodes are re-emitting each burst for it to be transmitted through the material, and enough bursts are being absorbed to keep all nodes fully powered up.   
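The absorb/re-emit rule sketched in the last few annotations could look something like this; the energy figures, the threshold, and the placeholder `process` step are all illustrative assumptions:

```python
# Sketch of the default relay rule: absorb energy from every incoming
# burst, re-emit (transformed if addressed to me) only when the energy
# budget allows, otherwise just absorb. Figures are illustrative.
EMIT_COST = 1.0      # energy needed to emit one burst
ABSORB_GAIN = 0.4    # energy recovered from absorbing one burst

def process(burst):
    # placeholder logic step: invert the carried bit
    return {**burst, "value": 1 - burst["value"]}

class RelayNode:
    def __init__(self, address, energy=0.0):
        self.address = address
        self.energy = energy

    def on_burst(self, burst):
        """Return the burst to re-emit (possibly transformed), or None."""
        self.energy += ABSORB_GAIN          # every incoming burst delivers energy
        if self.energy < EMIT_COST:
            return None                     # insufficient power: just absorb
        self.energy -= EMIT_COST
        if burst["target"] == self.address:
            burst = process(burst)          # addressed to me: apply logic, emit result
        return burst                        # otherwise re-emit unchanged

node = RelayNode(address=5)
out = [node.on_burst({"target": 9, "value": 1}) for _ in range(3)]
# the first two bursts are only absorbed; the third is relayed
```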

       Noise ... (more rapid scrambling ...) Perhaps we use the redundancy concept here too. Noise in this case should be a problem only if too many bursts are hitting the node to be processed at once. With ultra-simple nodes, the processing time per burst could be very fast, but even so you need to be able to 'ignore' a burst that arrives at the wrong time. The vast majority of the time that would be fine, but what if that burst was addressed to that node specifically? Redundancy would work if it allowed for the fact that if node 1 misses a burst, node 2, which is identical, will have dealt with it   

       HOWEVER: node 1 and 2 are then no longer identical. AND you have potential 'echo' problems as multiple versions of this node all reply   

       Better models are available as we move up towards micro-scale, where a 'node' has much higher internal processing capacity and can implement internal selection/prioritisation processes.   

       Programming ... it's not entirely free range, as there are organising dynamics which keep track of it all at a higher level, just as current processors do. Many (most) of the nodes are little more than pointers and the programming essentially requires a combination of node outputs to identify the next set of node values needed for the next computation   

       (not sure if that helps ...)
kindachewy, Jun 24 2009

       If you were going to design nano units from scratch to provide computing power (with light-speed communications between them) then you would probably need an awful lot of them to do something useful.   

       The synapses in the human brain use links to connect between themselves, and light-speed links would require something like fibre optics. Maybe there is a plant-like method of projection that could be used to allow a fibre optic link to be shot at another unit and somehow docked.   

       I'd probably recommend using radio to induce power within the nano units, like a crystal radio, rather than using it for communication.
Aristotle, Jun 24 2009

       This principle could be modelled using small Lego robots on a large flat floor. Robots could emit tones, with a specific frequency for each bot. They could listen for the frequency of robots they needed to hear. The idea would be to have the robots communicate and assemble in various patterns.
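A toy model of [bungston]'s scheme, with each robot assigned its own tone frequency; the frequencies and robot names here are illustrative:

```python
# Toy model of the floor-robot scheme: every robot "hears" every tone,
# but only reacts to frequencies on its own listening list.
robots = {
    "A": {"freq": 440, "listens_to": {880}},
    "B": {"freq": 880, "listens_to": {440}},
    "C": {"freq": 660, "listens_to": set()},
}

def broadcast(sender):
    """Return the robots that act on the sender's tone."""
    freq = robots[sender]["freq"]
    return [name for name, r in robots.items()
            if freq in r["listens_to"]]

heard_by = broadcast("A")   # only B listens on A's 440 Hz tone
```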
bungston, Jun 24 2009

       [bigsleep] Well no ... apart from 'the laborious process of plugging things in'! But that IS the whole point.   

       Any physical connection approach requires complicated, expensive, and relatively fixed/solid interconnections - which are particularly badly suited to environments which may cause physical damage.   

       On the one hand, there may be challenges finding ways to implement a wireless signal that can carry enough information. However, the concept is that the nodes (and packaged messages) would be as small as possible, and that 'addressing' would be used to separate these out, even on the same frequencies etc.   

       On the other, if you abandon wireless for wired you add enormously to the rigidity and brittleness of the entity, and to the overhead in terms of complexity, signal travel distance, etc because of the need for routing etc.   

       The reality is that both approaches have problems, but problems are there to be solved.   

       This is very different from any physically interconnected model - again, that is the whole point of this idea!
kindachewy, Jun 25 2009

       To grasp the problems with radio I would recommend considering a bar or pub where everyone is talking at the same volume. Add too many people and problems start to arise.   

       Too many people speaking at once and no-one gets to hear anything. If instead people wait for their turn to speak then, with too many people, some people will never get a chance to speak.   

       Furthermore, if someone wants to respond to someone they will have trouble telling people apart or knowing who said what. If people added their names, which might not be unique, then any communication delays can lengthen.   

       It all depends on the scale (nano or otherwise) and the numbers (several or millions).
Aristotle, Jun 25 2009

       Consider internet packets. Billions of the little things are buzzing around, but my PC only sees the ones that are sent to it.   

       OK - I'm arguing against myself here, because that's based on a router model. But the idea is that the nodes have some minimum router capability: they get each packet, but quickly reject if not the correct address.
kindachewy, Jun 25 2009

danman, Jul 10 2009

