I have been mulling over my neural networks idea (see link). This is a bit of a ramble and only makes sense if you have read the original idea.
A useful neural network would require a large number of modules. Producing very small modules allows a large number of modules to be connected, but also presents some difficulties.
Using current semiconductor manufacturing, the modules could be made very small; preferably the same size as a neuron in an animal brain (approximately 50 micron diameter).
This would be achieved by fabricating an array of circuits onto thin film semiconductors (with a depth of 50 microns). The thin films would then be cut/shattered into tiny blocks, each containing a circuit.
This would make it possible to have 8000 modules within 1mm^3. And 8 million in a cm^3, 8 billion in a 10 cm cube, 216 billion in a 30 cm cube (the human brain has roughly 100 billion neurons).
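To sanity-check those numbers, here's a quick Python sketch (the 50 micron module size is the figure from above):

```python
# Sketch: how many 50-micron cubic modules fit in a cube of a given edge length.
MODULE_SIZE_UM = 50  # module edge length in microns

def modules_in_cube(edge_mm):
    """Number of 50-micron modules packed into a cube of edge length edge_mm (mm)."""
    per_edge = int(edge_mm * 1000 / MODULE_SIZE_UM)
    return per_edge ** 3

print(modules_in_cube(1))    # 8000 in 1 mm^3
print(modules_in_cube(10))   # 8 million in 1 cm^3
print(modules_in_cube(100))  # 8 billion in a 10 cm cube
print(modules_in_cube(300))  # 216 billion in a 30 cm cube
```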
Making the modules this small would come at a cost: difficulty in manipulation, i.e. it would be extremely time-consuming to place each block individually. Instead, the modules could be arranged randomly. Luckily, randomness is a key element in designing neural networks by an evolutionary approach.
Randomly arranging modules means each module needs to function regardless of its orientation. Each face of each module needs to potentially act as an input and/or an output.
Because the design of neural networks is an iterative process, the blocks would need to be assembled and disassembled many times. Separating the blocks could be achieved by giving different types of blocks different densities; blocks are immersed in a liquid such that some sink and some float. The modules could also be made different colours if finer manipulation needs to occur.
Modules of different circuit types would be mixed together. The mixture of modules would then be sprinkled over a bed of nails. The nails would align the modules into a regular cubic lattice so that the face of each module would contact the face of an adjacent module.
Each module is shaped like a 3d cross. This allows the modules to be easily aligned into a cubic lattice. (see illustration)
The small scale and inexactness of alignment would necessitate each face having only a single electrical contact. Having only a single contact per face presents some difficulties: a) getting both power *and* signal to the modules; and b) determining the directionality (i.e. whether the face is acting as an input or an output) at each electrode face.
powering the blocks
Each module face can only have a single electrical contact (because of size and alignment issues). The electrical contact must be used for transmitting the signals between modules.
So how to get power to all the modules? Wirelessly. Each module would have an embedded inductor which receives power from an alternating magnetic field.
type of modules
There would be two types of modules - neurons and dendrites. The neurons would receive and transmit information via their inputs/outputs. The dendrites would connect the neurons. The development of the neural network would involve finding the best way of combining the neurons and the dendrites.
The basic function of a neuron is to fire an output signal when sufficient input signals arrive.
The simplest circuit design which emulates a neuron is an amplifier. Input signals from each input face of the module are combined and the sum of signals is output to one of the remaining faces. For example, five of the faces are inputs while the remaining face acts as an output (alternatively 5 outputs, 1 input). However, this does not adequately address the problem of making each module function regardless of orientation.
One solution is to have the 6 amplifiers *superimposed* on one another. So each face simultaneously acts as an input for 5 different amplifiers and an output for another amplifier. A simple protection device could be added at each face to prevent output->input feedback.
Merely having a large number of amplifiers stuck together probably isn't going to lead to complex circuitry. What is needed are non-linear components. A simple non-linear device such as a diode is placed at each output.
The diode makes the amplifier fire an output signal only when there is enough voltage to overcome the diode barrier voltage (typically 0.7 V). This replicates a function of neurons - they fire in a burst with sufficient input.
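As a rough Python sketch of this neuron model (my illustration; the gain and voltage values are arbitrary): the diode's ~0.7 V barrier becomes the firing threshold, and each face runs one of six superimposed amplifiers so orientation doesn't matter.

```python
# Sketch of the amplifier-plus-diode neuron: sum the input voltages and
# fire only if the amplified sum overcomes the diode barrier voltage.
DIODE_BARRIER_V = 0.7  # typical silicon diode forward voltage

def neuron_output(inputs, gain=1.0):
    """Output voltage at one face, given the other 5 faces' input voltages.
    Returns 0 unless the amplified sum exceeds the diode barrier."""
    total = gain * sum(inputs)
    return total - DIODE_BARRIER_V if total > DIODE_BARRIER_V else 0.0

# Six superimposed amplifiers: each face is the output of one amplifier
# and an input to the other five, so the module works in any orientation.
faces = [0.2, 0.3, 0.0, 0.4, 0.1, 0.0]
outputs = [neuron_output(faces[:i] + faces[i + 1:]) for i in range(6)]
```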
Basic dendrites would electrically connect adjacent modules. To allow for a *learning* neural network, the dendrites would adapt/change over time.
For example, the dendrites could increase/decrease resistance over time. The more signals that pass through a dendrite, the stronger (i.e. lower resistance) the connection becomes. This could be achieved using something like varistors or a floating gate transistor. This emulates how connections in the brain work - growing/dying depending on use. The dendrites could have a resistance change that is either reversible or non-reversible.-- xaviergisz,
Oct 28 2007
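A minimal sketch of such a use-strengthened dendrite (my own illustration, not from the idea; the resistance values and decay rate are arbitrary):

```python
# Sketch of an adaptive dendrite: each signal that passes lowers the
# resistance slightly (a stronger connection), down to some floor.
class Dendrite:
    def __init__(self, resistance=1000.0, floor=10.0, decay=0.9):
        self.resistance = resistance  # ohms
        self.floor = floor            # minimum resistance, ohms
        self.decay = decay            # multiplicative strengthening per signal

    def pass_signal(self, voltage):
        current = voltage / self.resistance
        # use strengthens the connection (cf. varistor / floating-gate behaviour)
        self.resistance = max(self.floor, self.resistance * self.decay)
        return current

d = Dendrite()
for _ in range(20):
    d.pass_signal(1.0)
# after repeated use, the connection has a much lower resistance
```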
Modular neural network
Modular_20neural_20network [xaviergisz, Oct 28 2007]
http://technology.n...of-electronics.html
this would be perfect for the dendrites in this idea [xaviergisz, May 01 2008]
Tiny Chiplets: a New Level of Micro Manufacturing
http://hardware.sla...micro-manufacturing [xaviergisz, Apr 10 2013]
//complete lack of understanding of what a neural network really is//
I'm trying to emulate a true neural network - an animal's brain. Is there an important ingredient that I'm missing?
//modelling a top of the range Intel processor out of thermionic valves//
The whole reason I'm designing a neural network by a kludgy (for want of a better word) method is that Intel chips have proven ineffective in creating useful neural networks. I believe that there is something about messy, 3 dimensional, analogue circuits that clean, digital systems cannot replicate - intelligence.
//why 3D representation is not necessary//
It seems intuitively obvious to me that 3D representation gives you more 'connectivity' than 2D. And connectivity appears to be a key component in 'massively parallel' neural networks (i.e. brains).
I concede this may be completely the wrong way of going about building neural networks, but until an effective neural network is built, it is hard to say what's right and what's not.-- xaviergisz,
Oct 28 2007
I like the idea of having some modules capable of acting as connections rather than processors. However, physically disassembling and rebuilding the whole thing is very inelegant, and there should be a better way. I'm pretty sure you could make "cubes" sufficiently flexible that they could switch roles in firmware rather than in hardware. And there's no need to go to cellular sizes - if the thing winds up ten times bigger than a human brain, that's OK.-- MaxwellBuchanan,
Oct 28 2007
bigsleep, I'm not trying to get an exact replica of a biological neural network, I'm just trying to take the minimum principles to make a working neural network.
Biological neural networks are amazing because of the difficult conditions that they have evolved in. In building an artificial neural network we (fortunately) don't have the same constraints as a biological system - we can use semiconductors rather than neurochemicals.
//In terms of connectivity, just looking at the human brain you can assume one thing. All neurons could potentially be connected to every other one//
This is pretty amazing. My system could also achieve this by either using a long string of dendrites or embedding radio transmitters/receivers in each module.
//I'm pretty sure you could make "cubes" sufficiently flexible that they could switch roles in firmware rather than in hardware//
But how do you get the firmware update to a module in the middle of a huge array of modules? I guess you could use radio transmission, but addressing one module out of millions would be difficult.
//disassembling and rebuilding the whole thing is very inelegant//
I envisage only needing to assemble/disassemble small groups of modules (about 1000 modules). When a cluster of modules can perform a required task it is connected to another cluster and so on. Admittedly this is all inelegant, but it is also extremely simple.-- xaviergisz,
Oct 28 2007
wow, you really don't like this idea do you bigsleep?
//Hmm. This is a troll of an idea isn't it?//
No. It'd be great if you could tell me exactly why this wouldn't work rather than just telling me that it is not a conventional approach.-- xaviergisz,
Oct 28 2007
//But how do you get the firmware update to a module in the middle of a huge array of modules? I guess you could use radio transmission, but addressing one module out of millions would be difficult.//
Errr, no it wouldn't.-- MaxwellBuchanan,
Oct 28 2007
OK, you probably could address individual modules with radio transmission; each module tuned to a slightly different frequency. But firmware-updatable modules would add complexity and size to the modules.-- xaviergisz,
Oct 29 2007
OK bigsleep, I'll summarise your complaints with my idea as being inefficient, slow and simplistic. With all of these I entirely agree! This idea is a method of designing a working neural network. It is an experiment, not a method of manufacturing a finished product. Any interesting results that are discovered can then be applied in more efficient manufacturing techniques.
The way I see it, I am trying to *emulate* the essential aspects of biological neural networks. This is different to the current approaches, which either try to *simulate* or *replicate* biological neural networks.
The simulation approach seems to miss out some of the connectivity and general messiness of biological neural networks. The replication approach is too involved in slavishly copying every aspect of a neuron, rather than just its basic function.-- xaviergisz,
Oct 29 2007
//Multi-layer networks are very hard to control, I bet you can't even find one link that describes a way of getting a 3D chunk of network to just behave in simulation//
I think you make good arguments in *favour* of this idea. Yes, designing multi-layer neural networks is difficult. Current methods of design are apparently not adequate. Yes, there are probably no accurate simulations of a 3D neural network 'chunk'.
Does this mean we should not even attempt to make multi-layer neural networks? No.-- xaviergisz,
Oct 29 2007
//To make the conclusion that lots of little dumb bits is all you need makes as much sense as making a 100GFlop PC from 100 million CPU's running at 1KHz.//
Ah, this is where we disagree. I believe that "lots of little dumb bits" is exactly how biological neural networks work and, by extension, how artificial neural networks should be designed.
This is why neural networks are good at what computers are bad at and vice-versa. And this is why I believe the computer approach to neural networks is so misguided.-- xaviergisz,
Oct 29 2007