halfbakery
Like gliding backwards through porridge.


Chimera Processor

Multi-core CPU, with lots of "past"
  (+4, -6)

For a long time, the computer industry has enjoyed the benefits of something called "Moore's Law", which is actually just an observation and not a Law. The observation was that the total number of transistors in an integrated-circuit "chip" tended to double every 18 months or so. Everyone knows it can't last, but there is fun to be had in guessing when it will end, and in trying to keep it going, anyway.

In the meantime, for more than the first two decades, the chip-making industry concentrated on exploiting the fact that the smaller the transistors, the faster they could be operated. From an initial "clock speed" of about 1 megahertz, after 11 generations of circuit-doubling they reached a "heat barrier" at about 2 gigahertz. All those very fast transistors were simply generating too much heat when operated that fast.
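As a back-of-envelope check of those figures (the 1-megahertz starting point and the 18-month cadence are this essay's round numbers, not datasheet values), 11 doublings multiply the clock by 2 to the 11th power:

```python
# Eleven doublings from a 1 MHz starting clock, per the figures above.
# Both numbers are round approximations, not measured values.
start_mhz = 1
generations = 11
clock_mhz = start_mhz * 2 ** generations
print(clock_mhz)          # 2048 -> roughly the 2 GHz "heat barrier"
print(generations * 1.5)  # 16.5 -> years elapsed at one doubling per 18 months
```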

Clock speeds have crept upward slowly since; most modern CPUs (depending on cost and energy-efficiency rating) range from 1 to 4 gigahertz in clock speed. But Moore's Law is still allowing the manufacturers to pack still more transistors into their chips! What are they doing with them these days?

The main thing they decided to do was put more than one CPU into the chip-package. It's quite reasonable that if you can put 2 billion transistors in the same space where you used to put 1 billion, and if one CPU uses up 1 billion transistors, then why not have two? (As I write this, recently they've started making 4-CPU chips, and are already preparing to put 8 in a chip.)

However! Those are very modern CPUs being talked about there! They have 11 or 12 generations of "bells and whistles" added to them, since the first generation. A first-generation CPU (with respect to a wide market presence) had at most some tens of thousands of transistors. How many of THOSE could we fit inside one of today's chips? Hundreds of thousands! (Or merely tens of thousands, if each was accompanied by dedicated RAM, on the one big chip.)
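That arithmetic can be made concrete. The 2-billion transistor budget and the 10,000-transistor early CPU used below are illustrative round numbers taken from the paragraph above, not figures for any particular chip:

```python
# How many first-generation CPU cores fit in one modern transistor budget?
# Both figures are illustrative round numbers from the text above.
modern_budget = 2_000_000_000   # transistors in a modern chip-package
early_cpu = 10_000              # "some tens of thousands" per first-gen CPU
print(modern_budget // early_cpu)  # 200000 -> "hundreds of thousands" of cores
```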

But doing something like that isn't really the goal of this Idea. Sure, somebody in the multi-CPU supercomputer business might say, "WOW!", and start planning to do something with that variant of this Idea, but that's not what I want to see done.

This Idea was dreamed up as a response to two Problems, even though not everyone really thinks of these "problems" that way. The First Problem is called "backward compatibility". See, when they built the modern 64-bit CPU, they decided they didn't need to be backward-compatible any more with the original 8/16-bit CPUs. No mainstream programmer any longer writes code that requires such a processor. If you have such code sitting around, though, like a favorite DOS game, you can't directly run it on a modern computer. You either need an old computer, or you need an "emulator" (more on that below).

The Second Problem is called "Hands-on Education". In the early PC days, a lot of people became programmers by being able to mess around all they wanted with the computer software. Even the Operating System was often accessible and modifiable, provided they were willing to learn Assembly Language. Even today it is widely recognized that the best programmers got that way because they learned AL, yet no emphasis is put on learning it these days. Why bother, when today's optimizing compilers do such a good job? But there is a Point, which is that the mind-set associated with learning AL and working with limited computer memory forces overall better habits with respect to programming. Sloppy code just does not FIT in limited space! This extremely valuable experience is basically no longer readily available to today's students.

I recognize that one workaround to that Second Problem is called "emulation". There's nothing wrong with using a program that emulates a limited-ability computer, and then requiring students to learn a few lessons in that environment. Emulation can also solve the problem of backward-compatibility, when you have an old game that can't be directly run on a modern CPU -- the game doesn't know it's running inside a "pretend" computer. On the other hand, emulators are not easy to write, and some games use oddball "officially unsupported" hardware behavior to do some special trick or other. Making an emulator THAT good runs up against the Law of Diminishing Returns, so nobody bothers. And the problem is even worse if your favorite old computer wasn't a DOS machine!
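For flavor, here is the skeleton every emulator shares: a fetch-decode-execute loop. The three opcodes below are invented purely for illustration; a real emulator of a 6502 or Z80 has to reproduce every documented (and, as noted, sometimes undocumented) behavior of the actual silicon, which is exactly where the Diminishing Returns come in.

```python
# A toy fetch-decode-execute loop, the core of any CPU emulator.
# The instruction set here is made up for illustration only.
def run(memory):
    pc, acc = 0, 0                    # program counter and accumulator
    while True:
        op = memory[pc]
        if op == 0x01:                # LOAD immediate: acc <- next byte
            acc = memory[pc + 1]
            pc += 2
        elif op == 0x02:              # ADD immediate: acc <- acc + next byte
            acc = (acc + memory[pc + 1]) & 0xFF
            pc += 2
        elif op == 0x00:              # HALT: stop and return the accumulator
            return acc
        else:
            raise ValueError(f"illegal opcode {op:#04x} at address {pc}")

program = [0x01, 40, 0x02, 2, 0x00]   # LOAD 40; ADD 2; HALT
print(run(program))                    # 42
```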

So NOW it is time to introduce the Chimera Processor! It has the very latest processor in it, maybe two, but it also has a LOT of older processors in it. It has an 80486, sufficiently backward compatible for everything in the DOS world. It has a 68000 in it, for compatibility with original Macintosh and Amiga software. It has a 6510 and other hardware, completely cloning the classic Commodore 64. It has a 6502 for Atari and original Apple and VIC-20 software, a Z80 for Radio Shack software, a 9900 for Texas Instruments software ("99/4A"), a 6809 for "Color Computer" software, and so on. There are also CPUs supporting Intellivision and Nintendo and Sega software. And so on, until nothing relevant is left out.

This can be done today because the patents on all those old processors have expired! (I didn't mention the original PlayStation because I think it's newer and still under patent protection, although it might be licensed.)

Now instead of an emulator, the Operating System simply needs to be able to isolate the chosen processor (easy enough if the hardware has been designed for it), so that no matter what the user does (especially a student learning!), the rest of the system will be unaffected. Most of the CPUs can have on-chip dedicated RAM (the 80486 and 68000 being the biggest and perhaps only exceptions), which means it's physically impossible for them to affect the main large CPUs, if they crash. After a few more cycles of Moore's Law, even the bigger "small" parts of the Chimera Processor can be entirely contained on-chip (that is, if some software for the 68000 likes 512MB of RAM, that much RAM will be able to fit on the one chip, along with everything else mentioned above).
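A minimal sketch of that isolation property, assuming each legacy core owns a private RAM bank (the names and sizes below are illustrative, not a real chip layout): any write a legacy core makes lands in its own bank, so it physically cannot scribble over main memory.

```python
# Sketch of the isolation idea: each legacy core owns a private RAM bank
# that nothing outside it can reach. Names and sizes are illustrative only.
class LegacyCore:
    def __init__(self, name, ram_size):
        self.name = name
        self.ram = bytearray(ram_size)   # dedicated on-chip RAM

    def poke(self, addr, value):
        # A wild write wraps around inside the core's own bank instead
        # of escaping into the rest of the system.
        self.ram[addr % len(self.ram)] = value & 0xFF

c64 = LegacyCore("6510", 64 * 1024)      # a Commodore 64's worth of RAM
c64.poke(0xFFFFFF, 0xEA)                 # far out of range, harmlessly contained
print(len(c64.ram))                      # 65536
```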

Vernon, Sep 14 2008







       Your premise is wrong. The same code I write today will run just as well on a 32-bit system as it does on a 64-bit system, unless the program is written directly in whatever assembly code the processor uses.   

       The DOS program that won't run on your modern computer doesn't have a problem with the CPU, it has a problem with missing audio and video drivers. If the DOS program was written to clock with the CPU clock, it may run unusably fast, but that can be addressed as well.   

       In your scenario, you'd need an OS that ran on each processor (simultaneously) and knew which processor to run which program on!
phoenix, Sep 14 2008
  

       [phoenix], ok, I wasn't fully informed. The full 64-bit mode does not have full backward compatibility, and I didn't know there was a 32-bit compatibility mode. So, obviously, an 80486 isn't needed. The others, though....   

       I also made an error regarding the 68000 and 512MB; that processor only has a 24-bit address bus (but 32-bit data registers), and can only directly address 16MB. A later version of that processor family, though, the 68020, is fully 32 bits, and some Mac models used it.
Vernon, Sep 15 2008
  

       Use an FPGA, and you could have any processor you liked in the time it takes to reprogram the array.
Some processors like 6502s are public domain for some devices, I believe.
coprocephalous, Sep 15 2008
  

       Something sort of like this has been happening since the dawn of the integrated circuit age, with the gradual absorption onto the die of more and more components that were once external - from floating point arithmetic units, to cache, to (in the Microchip dsPIC for example) DSP.   

       My understanding though is that what moves onto the die generally does so for sound architectural reasons - cache needs to be close to the CPU for example.   

       That's not to say that I dinnae like the cut o' yer jib, laddie. Have a dry salted croissant on me, ye scurvy dog. Aarr.   

       (It's just gone midnight)
BunsenHoneydew, Sep 18 2008
  

       This actually was baked forever ago. Many older Macintosh computers, for instance, had a dedicated 486 processor inside so you could run DOS and Windows on them. Today, however, emulation works so fast that something like this is useless. Even though the PowerPC processor is much more complex than an X86 CPU, you can easily run PPC Mac apps on an Intel Mac at a great speed using Rosetta.   

       You can emulate recent game systems at nice speeds, too. What would you gain from having the actual chips aside from a higher cost for something that nobody really needs?
cybrian, Jan 29 2010
  

       [cybrian], the answer is, "perfect hardware compatibility". Emulation is only as good as the software that does the emulating. Hardware designs that are proven and fixed, however, merely need to be copied (and shrunk to modern line-widths).
Vernon, Jan 29 2010
  

       perfect emulation is very possible.
WcW, Jan 30 2010
  

       Nostalgia would’ve been a more plausible answer.
Ian Tindale, Jan 30 2010
  
      
  


 
