I have been mulling over my neural networks idea (see link). This is a bit of a ramble and only makes sense if you have read the original idea.
A useful neural network would require a large number of modules. Producing very small modules allows a large number of modules to be connected, but also presents
some problems.
Using current semiconductor manufacturing the modules could be made very small, preferably the same size as a neuron in an animal brain (approximately 50 microns in diameter).
This would be achieved by fabricating an array of circuits onto thin-film semiconductor (50 microns thick). The thin film would then be cut/shattered into tiny blocks, each containing a circuit.
This would make it possible to have 8,000 modules within 1 mm^3, 8 million in 1 cm^3, 8 billion in a 10 cm cube, and 216 billion in a 30 cm cube (the human brain has roughly 100 billion neurons).
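As a quick check of that packing arithmetic (a throwaway calculation assuming perfect 50 micron cubes packed with no gaps):

```python
# Packing arithmetic for 50 micron cubic modules (idealised, no gaps).
module_mm = 0.05  # module edge length in mm (50 microns)

for label, edge_mm in [("1 mm cube", 1), ("1 cm cube", 10),
                       ("10 cm cube", 100), ("30 cm cube", 300)]:
    per_side = edge_mm / module_mm  # modules along one edge of the cube
    print(f"{label}: {per_side ** 3:,.0f} modules")
```

which gives 8,000; 8 million; 8 billion; and 216 billion respectively.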
Making the modules this small would come at a cost: difficulty in manipulation, i.e. it would be extremely time-consuming to place each block individually. Instead of placing each module individually, they could be arranged randomly. Luckily, randomness is a key element of designing the neural networks by an evolutionary approach.
Randomly arranging modules means each module needs to function regardless of its orientation. Each face of each module needs to be able to act as an input and/or an output.
Because the design of neural networks is an iterative process, the blocks would need to be assembled and disassembled many times. Separating the blocks could be achieved by giving different types of block different densities: the blocks are immersed in a liquid such that some sink and some float. The modules could also be made in different colours if finer sorting is needed.
Modules of different circuit types would be mixed together. The mixture of modules would then be sprinkled over a bed of nails. The nails would align the modules into a regular cubic lattice so that the face of each module would contact the face of an adjacent module.
Each module is shaped like a 3D cross. This allows the modules to be easily aligned into a cubic lattice (see illustration).
The small scale and inexactness of alignment would mean each face could carry only a single electrical contact. Having only a single contact per face presents some difficulties: a) getting both power *and* signal to the modules; and b) determining the directionality (i.e. whether the face is acting as an input or an output) at each electrode face.
powering the blocks
Each module face can only have a single electrical contact (because of size and alignment issues). The electrical contact must be used for transmitting the signals between modules.
So how to get power to all the modules? Wirelessly. Each module would have an embedded inductor which receives power from an externally applied alternating magnetic field.
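For a rough sense of scale, the voltage induced in such a coil can be estimated with Faraday's law. All the parameters below (turns, coil area, field strength and frequency) are placeholders for illustration, not design values:

```python
import math

# Back-of-the-envelope peak EMF induced in a tiny on-module coil by an
# external alternating magnetic field (Faraday's law, sinusoidal field).
turns = 10            # coil turns (assumed)
area = (40e-6) ** 2   # coil area in m^2, roughly a 40 micron square coil (assumed)
b_peak = 1e-3         # peak flux density in tesla (assumed)
freq = 10e6           # field frequency in Hz (assumed)

emf_peak = turns * area * 2 * math.pi * freq * b_peak
print(f"peak induced EMF: {emf_peak * 1e3:.2f} mV")
```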
types of modules
There would be two types of modules - neurons and dendrites. The neurons would receive and transmit information through their inputs and outputs. The dendrites would connect the neurons together. Developing the neural network would involve finding the best way of combining the neurons and the dendrites.
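A minimal sketch of the mixing step, treating the only free choice as the neuron-to-dendrite ratio (the lattice size and ratio below are arbitrary):

```python
import random

# Randomly assign module types to the sites of a cubic lattice, mimicking
# sprinkling a mixture of neuron and dendrite modules over the bed of nails.
def random_lattice(side=20, neuron_fraction=0.2, seed=0):
    rng = random.Random(seed)
    return {(x, y, z): ("neuron" if rng.random() < neuron_fraction else "dendrite")
            for x in range(side) for y in range(side) for z in range(side)}

lattice = random_lattice()
neurons = sum(1 for kind in lattice.values() if kind == "neuron")
print(f"{neurons} neuron modules out of {len(lattice)} sites")
```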
neurons
The basic function of a neuron is to fire an output signal when sufficient input signals arrive.
The simplest circuit design which emulates a neuron is an amplifier. Input signals from the faces of the module are combined and the sum is output on one of the remaining faces. For example, five of the faces are inputs while the remaining face acts as an output (or alternatively five outputs and one input). However, this does not adequately address the problem of making each module function regardless of orientation.
One solution is to have the six amplifiers *superimposed* on one another, so that each face simultaneously acts as an input for five different amplifiers and as the output of another. A simple protection device could be added at each face to prevent output->input feedback.
Merely having a large number of amplifiers stuck together probably isn't going to lead to complex circuitry. What is needed are non-linear components. A simple non-linear device such as a diode is placed at each output.
The diode makes the amplifier fire an output signal only when there is enough voltage to overcome the diode's barrier voltage (typically 0.7 V). This replicates a function of neurons: they fire in a burst once they receive sufficient input.
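A toy software model of one module makes this concrete. It is only a sketch: the 0.7 V threshold comes from the diode above, but the gain and the example input voltages are placeholders, and the real module would be an analogue circuit rather than code:

```python
# Toy model of a 6-faced neuron module: each face outputs the amplified sum
# of the signals arriving at the other five faces, and the series diode
# blocks that output until it exceeds the barrier voltage.
DIODE_DROP = 0.7  # volts, typical silicon diode
GAIN = 1.0        # amplifier gain (placeholder)

def module_outputs(face_inputs):
    """face_inputs: six voltages, one per face; returns six output voltages."""
    outputs = []
    for face in range(6):
        total = GAIN * sum(v for i, v in enumerate(face_inputs) if i != face)
        outputs.append(total - DIODE_DROP if total > DIODE_DROP else 0.0)
    return outputs

print(module_outputs([0.3, 0.3, 0.3, 0.0, 0.0, 0.0]))  # faces 3-5 fire, faces 0-2 stay silent
print(module_outputs([0.1, 0.1, 0.0, 0.0, 0.0, 0.0]))  # too little input, nothing fires
```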
dendrites
Basic dendrites would electrically connect adjacent modules. To allow for a *learning* neural network, the dendrites would adapt/change over time.
For example, the dendrites could increase or decrease their resistance over time. The more signals that pass through a dendrite, the stronger (i.e. lower-resistance) the connection becomes. This could be achieved using something like a varistor or a floating-gate transistor. It emulates how connections in the brain work, growing or dying depending on use. The resistance change could be either reversible or non-reversible.
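The use-dependent strengthening can be sketched in the same way; the starting resistance, floor and strengthening rate below are arbitrary placeholders rather than measured device values:

```python
# Sketch of a dendrite whose resistance drops the more signals pass through
# it, with a floor so the connection never becomes a perfect short.
class Dendrite:
    def __init__(self, resistance=10_000.0, min_resistance=100.0, decay=0.95):
        self.resistance = resistance          # ohms (placeholder starting value)
        self.min_resistance = min_resistance  # ohms (placeholder floor)
        self.decay = decay                    # fractional drop per signal (placeholder)

    def pass_signal(self, voltage):
        current = voltage / self.resistance
        # Each signal that passes strengthens (lowers the resistance of) the connection.
        self.resistance = max(self.min_resistance, self.resistance * self.decay)
        return current

d = Dendrite()
for _ in range(50):
    d.pass_signal(1.0)
print(f"resistance after 50 signals: {d.resistance:.0f} ohms")  # roughly 770 ohms
```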