Product: Audio: Microphone
"cochlear" microphone   (+5)  [vote for, against]
A microphone built like the cochlea, sampling frequencies directly

Much less info, and much more quality, to pass to the computer.

Send frequency-domain instead of time-domain info. No need for FFTs etc. Get the frequencies directly from nature. The result would be a "digital microphone" giving fantastic quality at the price of a simple mic.
-- pashute, Oct 30 2002

(?) Artificial Cochlea http://diwww.epfl.c...ik/eap/cochlea.html
[half, Oct 04 2004]

Er... what? Please explain this in more detail. My brain doesn't have the necessary information to fill in the gaps in what you wrote.
-- sadie, Oct 31 2002


Possibly a bit over our heads, but I'll give it a stab. So you use the artificial cochlea schematic as shown on the link to output frequency data to a computer's soundcard, instead of relying on the microphone to transmit a raw signal response, which has the effect of filtering the noise selectively?
-- RayfordSteele, Oct 31 2002


The link was a wild shot at attempting to find something possibly semi-related to what [pashute] may or may not have been referring to. I was hoping somebody might extrapolate, interpolate or fabricate some detail.
-- half, Oct 31 2002


OK, my guess is:

The inner ear is the shape of a nautilus shell. As it narrows it is resonant to different frequencies along its length. The tiny hairs along its length therefore pick up different frequencies. What it is, in effect, is lots of microphones in parallel, each with a very narrow frequency band within which it responds.

If you built a microphone on these principles you would have an amplitude for each frequency range, rather than a single amplitude trace that you have to split into frequencies before processing.

I guess you get a cleaner signal because you have to do less work on it and, because each small hairy mike is dedicated to one frequency, it can presumably perform its job better.

You could test this last bit by making woofer, mid-range and tweeter mikes and seeing if you get better quality by strapping these together than you would with a single mike.

On the principle that I think I understand what's going on despite the brevity, I'm going to tentatively award a croissant, though I'd prefer having to do less work to understand the idea.
-- st3f, Oct 31 2002
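st3f's "lots of microphones in parallel" picture can be sketched in software. Below is a rough model, not a design: the two-pole resonator recurrence, the band centres and the gain constant are all made up for illustration. Each channel is one narrow resonator standing in for one "hair", and you read off an amplitude per band instead of one raw waveform:

```python
import numpy as np

def resonator(x, f0, fs, r=0.99):
    """Two-pole resonator centred on f0: a crude model of one cochlear 'hair' channel."""
    w0 = 2 * np.pi * f0 / fs
    b = 1 - r                      # rough gain normalisation
    c1, c2 = 2 * r * np.cos(w0), r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y[n] = b * x[n] + c1 * y1 - c2 * y2
    return y

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
# A foghorn-ish tone plus a higher one, mixed into one pressure signal.
signal = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

bands = [200, 500, 1000, 2000]     # hypothetical channel centre frequencies
levels = {f0: np.sqrt(np.mean(resonator(signal, f0, fs) ** 2)) for f0 in bands}
# The 200 Hz and 1000 Hz channels light up; the 500 Hz and 2000 Hz ones stay quiet.
```

Because each channel only ever sees its own narrow slice of the spectrum, its output is exactly the "amplitude per frequency range" st3f describes.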


So this is about hit and miss then.
-- skinflaps, Oct 31 2002


Sorry people, and thank you, half! The ear works in a different way than a mic does. I was proposing an "artificial ear" like device which would be as cheap as a mic and might give "better quality" audio information (behaving a bit like a soundcard with audio software).

FFT, frequency domain, etc.: buzzwords of the audio industry.
-- pashute, Nov 01 2002


Hmmm, that didn't clarify this at all. I can still only guess at the way this works. <Withdraws croissant>
-- st3f, Nov 01 2002


I'll try to be clear. A mic works by moving in and out with the air pressure created by sound. After being electronically recorded, this creates a series of numbers going up and down, which is saved to a (sound) file or processed further. This information is called time-domain sound.

The ear works by having a snail-shaped tube with physical devices along it (tiny hairs and sensors) which record the frequencies that are present at a given time. Say you hear a foghorn (low frequency): the sound travels to the sensors at the far, inner end of the spiral, where the membrane is widest and resonates at low frequencies, and those give off a signal. When you hear a violin, sensors nearer the entrance, tuned to higher frequencies, give off theirs. While we talk the frequencies change all the time, and the ear "records" these changes.

If we want to compress the sounds and save them in MPEG-like files, the computer has to do some work and change the signal from the list of numbers in the time domain (from the mic) to a new list of numbers in the frequency domain, and then work on these new numbers further. This math is called digital signal processing (DSP), and the particular function for changing from the time domain to the frequency domain (and vice versa) is called the FFT (fast Fourier transform).

So basically the idea is to make an "artificial ear" as a mic replacement for the future of audio.

There are several other benefits to this "mic", but I've probably lost most of you by here anyway. If anyone reached this point and wishes to hear more, simply ask. Thank you for your time.
-- pashute, Nov 02 2002
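For anyone curious, the DSP step pashute says the cochlear mic would make unnecessary looks roughly like this. A minimal sketch using NumPy; the 440 Hz test tone and the sample rate are arbitrary choices, not anything from the idea itself:

```python
import numpy as np

fs = 8000                          # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
mic = np.sin(2 * np.pi * 440 * t)  # what a conventional mic delivers: time-domain samples

# The work pashute describes: the FFT turns the list of pressure samples
# into a list of amplitudes per frequency.
spectrum = np.fft.rfft(mic)
freqs = np.fft.rfftfreq(len(mic), 1 / fs)
loudest = freqs[np.argmax(np.abs(spectrum))]
# loudest == 440.0: the frequency-domain view a cochlear mic would output directly.
```

The proposal, as I read it, is that the hardware itself hands you `spectrum` and skips the `rfft` call entirely.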


FFT = fast Fourier... I should've known that one. Been a while since I've done 'em, though.
-- RayfordSteele, Nov 02 2002


It shouldn't be complex. It's just a tube and a sensor.
-- pashute, Nov 02 2002


Sorry, [pashute]. I'm afraid that it must be too complex for me. Is this sensor not a microphone? It has to be a pressure transducer of some type, no? I don't quite grasp how this sensor is converting the pressure wave (sound) directly to frequency domain information.

In my admitted ignorance in the area of sound processing, it sounds like this is a proposal to build a spectrum analyzer with a tube and a sensor. I must be missing something important.
-- half, Nov 02 2002


What is needed is a sensor along the tube, which tells where the tube is being pressed. In the human ear it is achieved by a simple mechanical array of hairs.

At some museums they use sand (or any other grain) to show the displacement along a tube. A simple photograph (or your eyes) show you what frequencies are being heard.

There are many different sensors out there now which could do the job, coupled with some material to give the desired effect (which the sensor will sense). That's the other half of the baking.

But I'm certain it's possible. (I work with this stuff every day)
-- pashute, Nov 02 2002


I remember this physics lesson vividly. It involved a glass tube with sand in it, called (Mr Croft enunciating carefully) "Kundt's Tube." It was agitated by a metronomic device which arrayed the sand in wavelengths, called a (ahem) "Vibrator."
-- General Washington, Nov 02 2002


Good idea, pashute. You are forgetting one thing, though - what is more important than the frequencies for understanding the signal is the *phase* of the frequencies. The magnitudes carry less information than the phase.

However, that's just being pedantic - the cochlear mic system can easily be designed to collect phase info as well. :-) Croissant for you!
-- cameron, Nov 03 2002
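cameron's point is easy to demonstrate with a small sketch (the test signals are made up): two signals can share identical magnitude spectra yet be different waveforms, with the difference living entirely in the phase:

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
a = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 5 * t) + np.cos(2 * np.pi * 10 * t)  # same components, one phase-shifted

A, B = np.fft.rfft(a), np.fft.rfft(b)
same_magnitudes = np.allclose(np.abs(A), np.abs(B), atol=1e-6)
different_waveforms = not np.allclose(a, b)
# Both True: magnitude alone cannot tell the two signals apart; the phase can.
```

So a cochlear mic reporting only per-band amplitudes would lose information; each channel would also need to report where it is in its cycle, as cameron says.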


I wonder if you could somehow pass information from this microphone, using parallel signals (one from each "hair" or specific frequency), to an oppositely constructed speaker system. Basically one to one, to demonstrate the idea.

DSP: I seem to remember that the human ear cannot easily distinguish frequencies that are close to each other (except for beat effects), and part of the signal processing is to remove frequencies that are close to each other. That would mean that the "hairs" can be placed at discrete distances apart, with no need for intermediate frequencies?
-- Ling, Apr 23 2010
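One way to read Ling's point: because the ear's resolution is roughly proportional (logarithmic) rather than absolute, the "hairs" can be spaced geometrically. A back-of-envelope sketch, where the third-octave spacing and the 50 Hz to 16 kHz range are assumptions for illustration, not anatomy:

```python
# Hypothetical channel layout: each centre frequency sits one-third of an
# octave above the last, mimicking the ear's roughly logarithmic resolution.
low, high = 50.0, 16000.0
step = 2 ** (1 / 3)                 # one-third octave per "hair"
centres = [low]
while centres[-1] * step <= high:
    centres.append(centres[-1] * step)
# ~25 channels cover the whole audible range -- far fewer than the thousand-plus
# linearly spaced FFT bins needed to match the same resolution at the bass end.
```

That is why discrete, coarsely spaced channels could still be enough: the gaps between them are gaps the ear would not have noticed anyway.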


