I can't do math. Can't do it at all. You know what would be really nice? If I could have the missing "math" chunk of my brain replaced by an implanted coprocessor completely dedicated to mathematical operations. I said it requires magic because I don't know enough about the brain to say how this would actually work: when the brain attempted to do math, the signals would have to be rerouted to the processor to be calculated, then sent back to the brain.

It's not an internal calculator per se; the human still has to remember how to do equations, it would just speed up the operations.

Counting http://www.sciencem...otics/smart/115.asp Test your math implant! [Jinbish, Dec 15 2010]

The story of Mathematics http://www.open2.net/storyofmaths/ Website related to BBC series looking at the history of maths. I don't think it mentioned the Romans, but there were many other cultures studied. Presented by Prof. du Sautoy. [Jinbish, Dec 16 2010]

Another example http://www.youtube....JtI&feature=related finding cube roots of perfect cubes - who'd have thought it could be so simple! [NotationToby, Dec 22 2010]

//the missing "math" chunk of my brain// That chunk is
probably repurposed. Is there anything you're particularly
good at? Interested in trading that for improved math skills?

Given that partial sight has been restored to blind people by implanting a grid of electrodes in contact with the visual cortex, and that chipmanzees (< mistype stays) have been able to control a computer game via implanted electrodes, I suspect that this is doable, and not actually terribly difficult.

As [mouseposture] implies, I wouldn't want part of my brain replaced; better to add a co-processor to a basically intact brain.

It would probably take some time to learn to interface with the new hardware, but eventually you should be able to think operations and have the answer pop up.

I like the idea of implanting electrodes into babies' brains, and interfacing them with computers, mobile devices etc. As they grow up, communing with the internet, computing resources, and each other would be as easy as thought, and something like telepathy and a hive mind would evolve. Ethics schmethics.

[8th] But you're a giant evil hegemony, crushing and absorbing all in your path, with your advanced but remorseless technology... Wait, where do I sign?

Hey, I'm not disparaging cybernetics here, [8th], it's just that the organically grown neurons are really excellent at what they do. Training them to work as required is a better solution than replacing them with some clunky semiconductor.

So, unless you're Noonien Soong, it's all about the neurons!

I'm with [spidermother] - doesn't seem like it would need magic to me.

This sort of brain enhancement has been covered extensively in science fiction.
I don't know of it being done for maths; sight and hearing have both been done, at least in nascent form. The senses are possibly an easier thing to do, since the information is all output (from the point of view of the implant).

I can imagine that it might be possible to train the brain to communicate with the device over time - perhaps one would have to start with a very basic processor and work up. For example, numbers might have to be shifted into the device one digit at a time. Returning the result of various operations (addition, subtraction, multiplication, division) could be done effectively simultaneously, and I expect would be relatively easy, both technologically and to learn. A full general-purpose (i.e. Turing-complete) processor would be much better, but probably harder to learn to interface with.
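The shift-in scheme just described can be sketched as a toy model; everything here (class and method names, the 'end of number' signal) is invented purely for illustration, not a real device:

```python
class DigitShiftCalc:
    """Toy model of the shift-in interface: decimal digits are
    pushed one at a time, and once two operands are complete,
    the results of all four operations are available at once."""

    def __init__(self):
        self.operands = []   # completed numbers
        self.current = None  # number being shifted in

    def shift_digit(self, d):
        # append one decimal digit to the number under construction
        self.current = (self.current or 0) * 10 + d

    def end_number(self):
        # 'end of number' signal: commit the current operand
        self.operands.append(self.current or 0)
        self.current = None

    def results(self):
        # all four operations returned "effectively simultaneously"
        a, b = self.operands[-2], self.operands[-1]
        return {'add': a + b, 'sub': a - b, 'mul': a * b,
                'div': a / b if b else None}

calc = DigitShiftCalc()
for d in (4, 2):        # shift in 42, one digit at a time
    calc.shift_digit(d)
calc.end_number()
calc.shift_digit(7)     # shift in 7
calc.end_number()
print(calc.results())   # add 49, sub 35, mul 294, div 6.0
```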

This is not magic. Blind people have had brain implants in their visual cortices which produce grids of visual stimulation, and it's possible to read brain activity to the extent that one can work out whether someone's thinking about tennis or not. Taking the two of these together, the answer is to come up with a notation in which thinking about tennis in a particular rhythm represents bits: thinking about tennis equals one, not thinking about tennis equals zero. Then you learn ASCII in binary. The input device registers what you're thinking about and forms a kind of binary machine code, including arithmetic and logic operators and delimiters for the start and end of sequences. The answer is then displayed as a series of dots in the visual field. However, you would be blind, and it might be easier to do the maths than learn how to use the device.
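The tennis/not-tennis protocol amounts to an ASCII-over-binary codec, which might be sketched like this (the function names are invented for illustration):

```python
def to_tennis_bits(text):
    """Encode text for the 'tennis protocol': each ASCII byte
    becomes 8 bits, where 1 = thinking about tennis and
    0 = not thinking about tennis."""
    return [int(bit) for ch in text
            for bit in format(ord(ch), '08b')]

def from_tennis_bits(bits):
    """Decode a stream of tennis/not-tennis readings back to text,
    8 bits at a time."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = bits[i:i + 8]
        chars.append(chr(int(''.join(map(str, byte)), 2)))
    return ''.join(chars)

bits = to_tennis_bits("2+3")        # 24 tennis/not-tennis 'thoughts'
print(from_tennis_bits(bits))       # 2+3
```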

//but arithmetic is such an imaginary thing... and it's not a survival trait in any sense//

I would disagree; the ability to track and understand numbers is definitely a survival trait, and it has been since human interaction became more important than self-reliance (probably some time around the agricultural revolution). This is still, admittedly, a vanishingly short time in evolutionary terms.

First of all, as Jinbish says, you already have one of the most advanced, super-fast, massively parallel processing devices in your head, for free.

Cutting a bit out and replacing it with something that can only work at a snail's pace in comparison is not a good way to learn mathematics.

The magical part here, which I think is what nineteenthly has tried to address, is the interface by which you get to mentally encode the information you want mathified, and then to mentally decode the myriad images of tennis players back into a meaningful result. But in order to do that, you'd need to learn maths.

In order to 'do' maths (assuming that by "math" you mean applying mathematical techniques to a real situation, and not the 99% of maths that is often neglected by everyone else) you need to do 3 distinct things:

1) You take a real-world situation, define a question you want answering in strict enough terms for it to be encodified and collect all the information necessary to calculate the answer. (Ask a question)

2) You then apply various rules and translations to your data to come up with a result. (Calculate an answer)

3) which then needs to be de-datafied back into a real-world answer. (Understand what the answer means)

If you're no good at part 1, you won't be able to do part 2 effectively; and if you're no good at part 3, then you may as well not have bothered in the first place.

So, assuming we are talking about an implant that does this 3-stage process - how do we go about stage 1? By what process do we shape the problem being addressed, define its important elements, and present them to the next step in a way that allows an answer to be generated - all in your mind?

The only way I can see for this to be achieved would be to hire someone who is good at maths to follow you around and perform the required calculations at your request.

The interface is well established and user friendly (unless you get a grouchy mathematician), the software is continuously updated and the power supply is rechargeable from a variety of sources.

//First of all, as Jinbish says, you already have one of the most advanced, super-fast, massively parallel processing devices in your head already, for free.//

That may be true, but can it do basic maths on several-digit numbers in the blink of an eye? Can it bollocks.
I *can* multiply two 2+ digit numbers together, but need pen and paper. With much rote learning I memorised a 12x12 multiplication table - but there are still areas which suffer from pattern interference. Division is the same - possible with an external memory, but slow.
Face it - there are some things brains are good at, and there are some things computers are good at - but they're not the same.

So I think that, provided one knew how to do basic maths, a co-processor (essentially a calculator) would be really useful. A simple internal computer which you could program on the fly - well, that would mean that amazing things could be done.

I'm not even sure SF has done that possibility justice, come to think of it. In the stories I've read, characters are effectively described as using their senses to navigate a network/virtual space, rather than augment their intelligence.

As far as I can tell, once we have neural interfaces of a sufficiently high fidelity (interface with thousands to tens-of-thousands of neurons), performing fast mathematical computations is going to be the simplest and easiest of tasks. More interesting is going to be your Google implant. With neural network simulations, you'll be able to augment your intelligence far beyond what most people consider.

Brain implant to process numbers????

Consider this: You are wondering why your car/jet-pack/whatever is mis-behaving, or how it works or whatever. Think on it for a few moments, then the answers will start to flow naturally from this interface as though you are remembering something you used to know.

There are going to be some fantastic advances in neural prostheses within a decade.

The problem with this is that numbers are not the same as the notation used for them. If we were generally accustomed to using Roman numerals rather than Arabic ones, we'd be struggling with how to perform operations on them. On the other hand, if we simply visualised certain mathematical and arithmetical processes, we'd be able to perform a lot of operations intuitively: integer division, modulo, multiplication, squares, cubes and integer square and cube roots, addition and subtraction of positive integers (all up to a thousand or more), logarithms, calculus and trigonometrical functions, and probably other things. The trouble is that we get bogged down with the figures and how to manipulate them. They are significant as a sort of useful fiction which reveals certain truths and properties of numbers, but there's another, I'm tempted to say deeper, reality pertaining to number and what can be done with it. However, there are limits to that too. All of those faculties are already there in our minds.

I was expecting a large group of persons trained to rapidly perform small portions of large complex problems in their heads, then pool their answers with other persons who would mentally sum up the parts and present the solution.

Applied to real brains this is going to be a bit lame, and probably not worth it, but not impossible. Imagine doing this with an AI, though. The same neural/digital meshing would be required, except this time you could splice millions of neurons to create a 100% accurate 'Notepad' memory, or an extensive maths library.

I think precise storage would be the most valuable tool, as for our own purposes being able to track down an old source is invaluable: the patterns contained within it can still be recollected, if not the detail. That is, the exact record would be encoded and recalled by the 'user' in their own way, like someone quickly refreshing their memory from a known book or crib sheet.

//Roman numerals// could work in an arithmetic environment given a bit of work: probably easier than Arabic numerals though more wordy ... you'd want to notate decimal fractions in reverse of course.

//choir for solving number problems//
"okay can anybody here read arabic numerals.... anybody.... oh, come on..."

Roman numerals can be made to work and there are techniques for dealing with the likes of multiplication and division but in situations where they were the only numerical form of notation, the abacus, a place-value system, was used instead of bothering with them. My point is that Arabic numerals or any other system are only a representation of number, not number itself. Nothing magical happens when there are ten items in front of you instead of nine which necessitates the use of another symbol.

We're familiar with place-value based notation using interchangeable symbols. Number bases don't generally alter the underlying situation. What we learn is not to get to grips with mentally placing two groups of objects together in our mind's eye and apprehending the number immediately, but a series of symbol manipulation rules which are mathematically significant but not the same as actually doing arithmetic with raw quantities.

The resources exist within our own minds but are unavailable to us under everyday circumstances without practice.

Yikes. I'm about to take a few prerequisite math and science 12 courses after twenty-some-odd years out of school, and could use one of these coprocessors, I think. <whispers> it's scary

Anything I could visualise; trigonometry, geometry and physics I was a whiz at but all the rest of mathematics just seems so bloody abstract to me. <readjusts ammo belt, checks straps> "Cover me, I'm goin in."

//Agree with [Loris]// //various operations (addition, subtraction, multiplication, division) could be done effectively simultaneously//

I think it's about requirements. To what level of accuracy and what kind of speed do we need an implant for? For that matter, what 'math' do we need a co-processor to do? If all we're talking about is addition, subtraction, multiplication, and division, then how accurate do we need to be?

So I have to decide whether a 4-weekly rail season ticket is cheaper, or a monthly travelcard... That's not too tricky to do to the nearest dollar or pound - and if the difference comes down to pennies and cents, is it a significant factor? Maybe I have a tax return form to do, with lots of addition and then some percentages. An implanted chip might help here, I guess... but then, the real trick of the form is knowing what to put in which column/box...

Then there are more complicated functions like calculus (differentiation & integration and so on) - that's a point, I *hate* integration - but that's because I don't remember all the various rules... it's not the actual computation.

Matrices - now we're talking. That's some computation right there... except we don't often have a conscious need to compute a matrix. If there are complex functions - such as understanding the rate of change of height with vertical distance travelled, our brains often tacitly compute this.

And as for the mashed potato, you don't weigh it out. You iterate through with a series of "splodges". The chances are that your weighing/portioning will have a greater error in it than any maths you do in your head. Besides, anything left over after 2 runs of splodges is used with the trebuchet that you have just built with wooden spoons... "Have at you!"

{Guest1 implicitly calculates trajectory of mashed potato and takes evasive action. Potato hits wall with a "splat!"}

// //Roman numerals// could work in an arithmetic environment given a bit of work: probably easier than Arabic numerals though more wordy ... you'd want to notate decimal fractions in reverse of course.//

Proper Roman numerals go i, ii, iii, iiii, v, vi, vii, viii, viiii, x, etc. (sometimes j was used to break up the runs of i). The use of iv, ix, etc. is a modern convention; strictly, Roman numerals are non-positional. They're also not really decimal, so the use of a decimal mantissa would not sit well with me. I would prefer to do as the Romans did, and use ordinary fractions.

Aside: how do you write the square root of -i in roman numerals?

I assume the working ledgers had separate columns for each power-of-10 grouping, each column being 5 characters wide, subdivided into 1,4 or 2,3 subcolumns. Even a long list being added or subtracted together is easily worked at systematically with a pencil and paper and you never have to run out of fingers. Doubtless this is how it was actually done. The finished sum of course would be written on the bill-of-sale or whatever without spacing.

I'll get back to you if I figure out multiplication & division... meanwhile, even though the Romans used fractions in twelfths, you could do decimals by reversing the order (i.e. .VIIILDCCMM would be .8572) and not have to worry about occasions when you wanted more or less detail.

//you could do decimals by reversing the order (ie .VIIILDCCMM would be .8572)//

Sure, but you'd have to read that as MM/(X) + DCC/M + L/C + VIII/X. That is (to me) an unaesthetic kludging of two not-very-compatible systems, and imposes a parsing algorithm which is not implied by roman numerals themselves, but rather imported to enforce an artificial compatibility with base 10 numbers.

And it breaks down when you want a zero in your decimal fraction (without still more kludging).
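For what it's worth, that parsing rule (MM/(X) + DCC/M + L/C + VIII/X) can be sketched in code, and doing so makes the zero problem concrete: a zero digit contributes no letters, so there is nothing to mark its place and the groups after it are misread. The function name is invented for illustration:

```python
from fractions import Fraction

VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50,
          'C': 100, 'D': 500, 'M': 1000}

def parse_reversed_fraction(s):
    """Parse a reversed-order Roman decimal fraction (the part
    after the point), e.g. 'VIIILDCCMM' -> 0.8572.
    A new digit group starts wherever the letter value increases;
    group k (1-based) contributes group_sum / 10**(2*k - 1),
    which is the quoted VIII/X + L/C + DCC/M + MM/(X) rule."""
    groups = []
    prev = 0
    for ch in s:
        v = VALUES[ch]
        if not groups or v > prev:  # value went up: new group
            groups.append(0)
        groups[-1] += v
        prev = v
    # NB: a zero digit leaves no letters, so its group is simply
    # absent and every later group lands one place too high.
    return sum(Fraction(g, 10 ** (2 * k - 1))
               for k, g in enumerate(groups, start=1))

print(parse_reversed_fraction("VIIILDCCMM"))  # 2143/2500, i.e. 0.8572
```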

Roman numerals can be dealt with directly, but often weren't. The Roman abacus is not that similar even to the original numerals, because it can have empty columns, i.e. zero, and Latin words for numerals are not generally connected to the notation (unlike Etruscan, oddly, which used a different system but whose words used the subtractive principle found in later Roman numerals). So they used something else themselves. Roman numerals are mainly pictures of numbers rather than something used for arithmetic, on the whole. Arabic numerals are directly connected to arithmetic, though being decimal, still a fairly poor representation.

The Romans had a concept of negative numbers as it was possible to be in debt (in a sense, their concept of negative numbers was slavery). They also had a notation for fractions, but their units suggest they tried to avoid fractions by setting currency, weights and so forth at small enough values to eschew their use. Looking at it that way, what they did after the decimal point was to stop using decimal.

//At what point beyond wetwiring do we start to resemble the Matrix flicks?//

Well, the first score has lots of dynamic action and quick thinking, the second score is a bit more plodding with a mediocre storyline, and in the last score you lose the plot completely. That what you mean?

Why does everyone care so much about the Romans - what have they ever done for us?

(me)//various operations (addition, subtraction, multiplication, division) could be done effectively simultaneously//

(jinbish)//I think it's about requirements. To what level of accuracy and what kind of speed do we need an implant for? For that matter, what 'math' do we need a co-processor to do? If all we're talking about is addition, subtraction, multiplication, and division, then how accurate do we need to be?//

What is most important is probably that the interface is optimised for the brain, rather than for itself.

In my quote above I was imagining a 'simple' calculator - what would be the easiest way to ensure that the brain learnt what the data meant? I have the impression that this is easier where the system is always-on.

I propose that above a certain amount of complexity, the output could easily be configurable.
A conceptually simple return pathway at this level is to write to one of the senses - for example, hook into the vision system somehow. We know the brain is flexible enough to cope with altered data - people have become used to seeing things upside down, for example. So just give people a HUD and it'll work out.

However, the input (to the co-processor) would probably have to be very well targeted, to use as few channels as possible. I've already suggested some sort of shift register. With 4 channels one could have 16 states, if the brain can deal with chording inputs. A fifth would then be needed to indicate input-ready.
So this could represent 0..9, end-of-number, remove-last-input, decimal point and sign, and have a few states left over.
So in the simplest useful system, one could push in several digits for one number, then again for a second, and know the results of a variety of operations. It would be nice to be able to then copy one of those results directly.
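The 16-state chording scheme could be modelled as a lookup table plus a tiny decoder; a sketch, with a purely hypothetical chord assignment (the fifth, input-ready channel is modelled by each list element being one strobe):

```python
# Hypothetical 4-bit chord table: 16 states cover the digits 0-9
# plus control symbols, with two codes left spare.
CHORDS = {i: str(i) for i in range(10)}
CHORDS.update({10: 'END', 11: 'DEL', 12: '.', 13: '-'})

def decode_chords(chords):
    """Decode a stream of 4-bit chords into a number string,
    honouring end-of-number and remove-last-input."""
    buf = []
    for c in chords:
        sym = CHORDS[c]
        if sym == 'END':          # end-of-number: stop
            break
        if sym == 'DEL':          # remove-last-input
            if buf:
                buf.pop()
        else:
            buf.append(sym)
    return ''.join(buf)

# push 3, point, 1, 5, delete the mistaken 5, then 4, then end:
print(decode_chords([3, 12, 1, 5, 11, 4, 10]))  # "3.14"
```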

The Romans are not on the table. Their own way of writing numbers fails to represent them accurately. So does ours, though differently. Both of them, though, also provide a structure which illustrates certain mathematical properties of numbers and has mathematical properties of its own.

The main thing, though, is that we can just do maths anyway. We just need to get in touch with our ability to do it rather than having a calculator implanted. If we had something like that, it should be more than a calculator. It could at least be a multimedia player, computer and telepathy device as well.

//It could at least be a multimedia player, computer and telepathy device as well.//

Good ones.

I've also rationalised what you were going on about earlier. I think the word is 'subconscious'. A calculator could never be used in a precise fashion, because the result would have no justification based on the 'steps and rules' carried out to feed the unit. Results would appear as if from the subconscious, and would therefore always be a bit untrustworthy. I have a really good gut feeling about that.

All integers are accurately represented in the decimal representation. All other numbers can be represented to arbitrary precision, so I fail to see the problem.
That's not to say that the output would necessarily be of that form of course.

Some people have developed the ability to do some basic operations reasonably quickly. But:
1) Not as fast as even simple processors from a decade ago
2) Not all operations, generally not division
3) It takes considerable time and effort to learn, as evinced by the small proportion of the population capable of demonstrating it.
4) Not as accurately - I saw a demonstration on TV where the guy failed to carry a 2.

And all this covers is the very simplest 'calculator' set-up. Even this proof of concept could still be very well worth it.
With the lessons learned about interfacing with the brain, the next stage would put an accessible processor in there with a bit of storage, even just a few dozen characters of output and I'm sure people would soon be doing amazing things.

If you have a multimedia player implant, and in particular a telepathy one (and incidentally I think it should also be a universal remote control), obsolescence raises its ugly head. Calculators with which one interacts telepathically are less important, except that the point might come when, in order to interact meaningfully with others, hold down a job, get into university and so forth, one might need an up-to-date implant. So it's semi-voluntary brain-surgery time. Even the way it's plugged into your brain would change in the end.

Subconscious yes, so interaction with the senses directly would be an improvement. You hear a numbers station as an auditory hallucination and you can't get the melody for Lincolnshire Poacher out of your head.

If the possibility exists that something better will come along and it becomes standard practice to have an implant which effectively enhances one's cognition, a major problem of social exclusion could emerge. Who'd employ, for instance, an accountant who couldn't glance at a balance sheet for a split second and tell you if it made sense?

Social exclusion and prejudice already exist in many forms, I don't think this would change that... Except that innumeracy rates would presumably go down.

//Who'd employ, for instance, an accountant who couldn't glance at a balance sheet for a split second and tell you if it made sense?//

Hasn't the development of spreadsheets basically made such work obsolete in any case?

On the down-side, maybe those with implants wouldn't be able to go near strong magnets, e.g. to have an MRI scan.

It's a hypothetical example, though the ability to do this might bring such a skill back. What I mean is, a job which requires rapid or high-volume calculation, or, more importantly, ordinary social interaction, would have had its bar raised: not because an implant was prohibitively expensive, but because it would become obsolete.

This is more immediate than most other kinds of social exclusion. It's more like someone not being able to speak the native language than not being able to drive.

Concerning magnets, we haven't specified what the implant is made of. It could be neuronic for all we know.

It is a good question, but I don't think it would be such an issue in practice.

People do already become unemployable in their field for a variety of reasons. But somehow it doesn't seem to be as severe an issue as one might expect - people are adaptable, and experience goes a long way. The kids with V2 devices may potentially be better, but the oldies with V1 devices would really know how to use them.
Also, as they get older, the work often becomes less 'doing' and more 'managing'.

For another thing, going beyond a certain level of computation must open some significant security vulnerabilities and other risks, even if it's not 'networked'.
I wouldn't want a brain implant which could run more than one program at once, or had lots of storage. I'd stick to something I could reasonably reset to factory-default without losing lots of stuff.
If you have lots of programs or data stored, a mistake or attack might mean losing the lot. Suppose you were persuaded to enter a program which output epilepsy-inducing strobing. You'd want to shut that off and delete it straight away.
If lots of programs could run in the background, someone might find a way to hack your brain, like a keylogger.
Too much storage space, and people would just fill it up with cruft, or stuff they hadn't recorded elsewhere.

Flexible silicon substrate with many silicon "pins" (conductive points) placed straight on the gray matter, wired to an optical com link so no electricity can bridge to the brain. A human brain is easily adaptable enough to use it.

Yes, as I was typing that I thought it was similar to knowledge becoming outdated.

The thing is, I like your thinking with the simplicity, but I think the utility of a relatively feature-free device wouldn't be that much more than what could be achieved through a form of savantry training, which might make it pointless.

[Loris] the problem with brain implants and MRI scans turns out not to be the magnetic field (not much, anyway), but rather the RF coil. The implant acts as an antenna, AC current flows through a circuit, the thing heats up, and cooks the brain. Fortunately, the pulse sequence can be tuned to not deposit too much energy in the tissue, so, within a very carefully calculated envelope, people with brain implants can have MRIs. If you get it wrong, though, the patient comes out of the scanner a vegetable.

Well, obviously this would have to be open source, otherwise it'd make us all stupid and mad. Concerning the brain-roasting problem, though, I think this is an argument for literally making it out of a neural net. There are animals which transmit and receive information electrically.

I think what's needed is some kind of benign teratoma made from the host's tissue, living somewhere other than in the brain but communicating with it like an electric catfish would.

There's the so-called "Vedic mathematics", but that only works for a few carefully chosen examples. I'll link to it on YouTube but it's not as useful as it seems, except for overcoming hangups about arithmetic.

I find counting in duodecimal on my fingers very helpful. There's a technique involving phalanges whereby one can count to a gross on one's fingers and sort of use them as an abacus.
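The duodecimal values involved are easy to check in code; a small sketch, assuming the common 'X'/'E' symbols for ten and eleven (the phalange method itself needs no written symbols):

```python
def to_duodecimal(n):
    """Convert a non-negative integer to base 12, using 'X' for
    ten and 'E' for eleven (one common convention, assumed here)."""
    digits = "0123456789XE"
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 12)   # peel off base-12 digits, low first
        out.append(digits[r])
    return ''.join(reversed(out))

print(to_duodecimal(144))  # "100" - a gross
print(to_duodecimal(35))   # "2E"
```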

No worries, there are various different techniques, but I think all of them only work more quickly or easily for relatively few examples. I can't speak for the others, but I remember that the multiplication algorithm is best for decimally-expressed numbers with low digits, like twenty-one or thirty-two. Once you get higher, the overheads of drawing the lines and so forth make it too much hassle.

(Note to self: save it for the blog).

Something else which might have some mileage is to use the phalangeal counting system in duodecimal but apply abacus techniques to it, then to "visualise" your hands doing it proprioceptively rather than physically doing it.