Computer: CPU
Hardware Huffman Decoding   (+1, -4)
Full-time utilization of compression at the assembler level

What I know about Huffman compression algorithms you could fit in the palm of your hand, but it seems to me that there's an incredible amount of wasted processor time involved in decompressing programs in order to run them. If there is indeed a one-to-one relationship between a compressed executable file and its decompressed form, wouldn't it be possible to design a chip that could understand the compressed format and translate it directly into assembler / machine language? Or is it the case that the algorithm is too 'fluid' in function and dependent upon the specifics of the file to be hardcoded? If all of the information is essentially there in some form, isn't the inflated form essentially just waste, albeit nice waste that allows the program to be more comprehensible?
-- RayfordSteele, Jun 09 2002

Practical Huffman coding http://www.compressconsult.com/huffman/
[phoenix, Jun 09 2002, last modified Oct 21 2004]

Huffman Coding http://datacompression.info/Huffman.shtml
[phoenix, Jun 09 2002, last modified Oct 21 2004]

Huffman Encoding http://ciips.ee.uwa...LDS210/huffman.html
"What's the smallest number of bits (hence the minimum size of file) we can use to store an arbitrary piece of text? " [phoenix, Jun 09 2002, last modified Oct 21 2004]

Compression Codec (Still, it's software) http://www.eggheadc...ticles/20010604.asp
I was looking for anything in asp.net that was headed in the direction you suggest -- and found instead this interesting article that by itself echoes some of the concerns with compression that you've stated. [reensure, Jun 09 2002]

CodePack http://www-3.ibm.co...f/products/CodePack
Baked by IBM. [JKew, Jun 09 2002, last modified Oct 21 2004]

Just wait for Rods to get here. Shouldn't be long now.
-- [ sctld ], Jun 09 2002


Compressing all your data, are you? Or are you referring to interpreted code? Conversion into machine language takes place at compile time. The optimization of the code depends on the programmer and the compiler.

Beyond that, I haven't the foggiest idea what you're suggesting so if I'm off-base, somebody set me straight.
-- phoenix, Jun 09 2002


He just wants Huffman decoding to occur through a microchip, as opposed to through a loaded software algorithm that takes up processor time.
-- [ sctld ], Jun 09 2002


Yes, but he specifically refers to programs and translating them into machine code. I'm trying to figure out why.

See my profile for my opinion on intelligence (capability) moving outward (being decentralized).
-- phoenix, Jun 09 2002


Arguably, a well-designed CISC machine is already performing a kind of fixed-table Huffman decoding; rarely-used opcodes are longer. The microcode's job is in part to perform the decompression. (I don't know if there are any well-designed CISCs (in this sense) on the market: the i86, saddled with decades of kludgy add-ons, isn't it. Maybe the ARM Thumb?) One of the disadvantages of the fixed-width opcodes typical of RISC designs is that they require more memory, and so more memory bandwidth, and memory bandwidth is often in short supply. Dynamic Huffman decoding would be really difficult because you wouldn't know what probabilities to use to continue after a branch.
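
A back-of-the-envelope illustration of that trade-off, in C, using an entirely invented instruction mix and encoding sizes (no real ISA is being modeled): frequent operations get short encodings, rare ones get long encodings, and the total is compared against a flat 32-bit format.

    #include <stdio.h>

    /* Toy numbers, invented for illustration only: a hypothetical
     * instruction mix encoded two ways.  The variable-width column gives
     * frequent opcodes short encodings and rare ones long encodings, much
     * as a fixed Huffman table would; the flat column is 32 bits each. */

    struct op {
        const char *name;
        long count;          /* occurrences in some hypothetical program */
        int  var_bits;       /* variable-width encoding size, in bits    */
    };

    int main(void)
    {
        const struct op mix[] = {
            { "load/store", 500000, 16 },
            { "alu",        300000, 16 },
            { "branch",     150000, 24 },
            { "misc/rare",   50000, 48 },
        };
        const int flat_bits = 32;

        long long var_total = 0, flat_total = 0;
        for (unsigned i = 0; i < sizeof mix / sizeof mix[0]; i++) {
            var_total  += (long long)mix[i].count * mix[i].var_bits;
            flat_total += (long long)mix[i].count * flat_bits;
        }

        printf("fixed-width code size:    %lld bits\n", flat_total);
        printf("variable-width code size: %lld bits\n", var_total);
        printf("ratio: %.2f\n", (double)var_total / (double)flat_total);
        return 0;
    }

With these made-up numbers the variable-width program comes out at a bit under 60% of the fixed-width size, which is the memory-bandwidth saving being described.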
-- wiml, Jun 09 2002


wiml: I was thinking of CISC machines too. Actually, for some applications it may be useful to include some hardware to assist in packing and unpacking bit-squashed data. I know the 80386 added some instructions for bit-field insertion and extraction, but on that particular processor the instructions were too slow to be useful. I don't know whether later processors implement them better.
-- supercat, Jun 09 2002


If you're talking about compression of instructions, I agree that CISC (complex instruction set computer) chips were designed for this. In the old days, memory was very expensive, so CPUs (such as the 8086) had single opcodes that could perform quite complex operations (there's an 8086 instruction for copying a string from one area of memory to another, for example). However, since the average PC has more memory than you could ever fill with program code, this approach has little advantage for home computers. The ARM Thumb instruction set introduces shorter instructions for commonly-used operations, which is a very limited version of this approach, for use in embedded systems.

However, instructions are not the most compressible part of software: things like graphics and data tables are far more amenable to compression, with more repetition, and also tend to take up more of an executable. Hardware *data* compression can be found in some CPUs intended for special markets: a DSP for audio processing such as the AD SHARC has hardware mu-law and A-law (lossy) compression.

I know someone who wrote a Huffman decoder in hardware for a video compression IC; it's certainly possible to compress and decompress bit streams in hardware, but this tends to be done in input/output devices rather than in CPUs. Putting Huffman in CPUs would increase CPU chip size (potentially a lot), and power consumption since the code would be called every time the program was run rather than when it was first downloaded and decompressed. In embedded solutions, the overheads of Huffman decoding would exceed the savings from having to read fewer instructions into the CPU.
-- pottedstu, Jun 09 2002


IBM has sort-of baked this for PowerPC cores: see link to CodePack documentation.

Essentially, it adds a decompressor on-chip between the CPU core and the memory controller, which uncompresses streams of compressed opcodes and feeds them to the CPU.

Claims to get 35-40% compression.
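
For a feel of where such a shim sits, here is a purely structural sketch in C. The single-byte dictionary "compression" and the example opcode values are invented stand-ins, not IBM's actual CodePack format; they are only there to show the core fetching plain instruction words through an expanding layer.

    #include <stdint.h>
    #include <stdio.h>

    /* Structural sketch only -- not IBM's CodePack format.  A decompressor
     * sits between the core and the memory controller; the core asks for
     * instruction n and always receives a full-width word. */

    /* Stand-in dictionary: each compressed byte names one 32-bit opcode.
     * The encodings happen to be PowerPC instructions, used as filler. */
    static const uint32_t dictionary[4] = {
        0x60000000u,   /* nop           */
        0x38600001u,   /* li   r3,1     */
        0x7C632214u,   /* add  r3,r3,r4 */
        0x4E800020u,   /* blr           */
    };

    /* "Main memory" holds the compressed program: one byte per instruction. */
    static const uint8_t compressed_image[] = { 1, 2, 2, 0, 3 };

    /* The decompression shim on the fetch path. */
    static uint32_t fetch(unsigned n)
    {
        uint8_t token = compressed_image[n];   /* read from "memory"     */
        return dictionary[token];              /* expand before the core */
    }

    int main(void)
    {
        for (unsigned pc = 0; pc < sizeof compressed_image; pc++)
            printf("pc=%u  opcode=0x%08X\n", pc, (unsigned)fetch(pc));
        return 0;
    }

In this toy, five instructions are stored in five bytes instead of twenty; a real scheme would use variable-length codes rather than a fixed four-entry dictionary.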
-- JKew, Jun 09 2002


// it seems to me that there's an incredible amount of wasted processor time involved in decompressing programs in order to run them //

Not sure I agree, for desktop computers at least -- is any software stored compressed anymore now that disk is so capacious and cheap?

Decompression can be very fast, in any case, given a speedy CPU: what you pay in decompression time you might well save in reduced disk traffic...
-- JKew, Jun 09 2002


I still use Zip files quite often, if for little else than long-term storage and archiving. But that's not my point here. I was searching for a way to improve processing speed in general, and the thought was that if a processor had to deal with fewer bits by some standard compression method, then processing speeds could be increased. Think about general information theory here, outside of the realm of programming for a minute. All of the necessary information is there in some compressed format, albeit one currently unintelligible to the processor, which must use a decompression algorithm of some sort before it can work with the file. If the processor spoke 'Huffmanese,' then it could deal with fewer bytes on the input side and possibly decrease the size of the chip necessary, or improve performance, or both. Reduced bus traffic as well. I'm not sure I would do this as an add-on; I would redesign the chip to spit out the decompressed result, given compressed information, simply by the instruction set. Although I suppose to maintain compatibility it would have to be an add-on.

I suppose, essentially, it's CISC advanced a step higher.
-- RayfordSteele, Jun 10 2002


Interestingly enough, I'm working on a compression format that could do just such a thing. Designed to allow streaming decompression and all that nice stuff. Although this would probably go better built into the hard drives.
-- ironfroggy, Jul 21 2002


I helped do some of the Huffman coding for the early freeware JPEG stuff. Instead of doing the Huffman coding bit-by-bit, try processing a load of bits in parallel using a look-up table. An 8- to 12-bit table will probably cover most of your more common cases in one hit. The table will return a code number, which will in turn tell you how many bits to shift.
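
A minimal sketch of that look-up-table decode in C, with an invented four-symbol code and an 8-bit table (real JPEG tables and code lengths would differ):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy table-driven Huffman decoder (illustrative only; the code
     * assignments below are invented).  Each of the 256 table slots is
     * indexed by the next 8 bits of the stream and records which symbol
     * those bits start with and how many bits its code actually uses. */

    typedef struct {
        uint8_t symbol;  /* decoded symbol            */
        uint8_t length;  /* code length in bits (<=8) */
    } entry_t;

    static entry_t table[256];

    /* Example code: 'a' = 0, 'b' = 10, 'c' = 110, 'd' = 111 */
    static void build_table(void)
    {
        for (int i = 0; i < 256; i++) {
            if ((i & 0x80) == 0x00)      { table[i].symbol = 'a'; table[i].length = 1; }
            else if ((i & 0xC0) == 0x80) { table[i].symbol = 'b'; table[i].length = 2; }
            else if ((i & 0xE0) == 0xC0) { table[i].symbol = 'c'; table[i].length = 3; }
            else                         { table[i].symbol = 'd'; table[i].length = 3; }
        }
    }

    /* Decode 'nsyms' symbols from a bitstream held in 'bits' (MSB first). */
    static void decode(const uint8_t *bits, int nsyms)
    {
        uint32_t window = 0;   /* bit accumulator                  */
        int have = 0;          /* bits currently in the accumulator */
        int pos = 0;           /* next input byte                  */

        while (nsyms-- > 0) {
            while (have < 8) {                 /* keep at least 8 bits on hand */
                window = (window << 8) | bits[pos++];
                have += 8;
            }
            uint8_t peek = (window >> (have - 8)) & 0xFF;   /* next 8 bits */
            entry_t e = table[peek];           /* one table hit resolves the symbol */
            putchar(e.symbol);
            have -= e.length;                  /* consume only the bits actually used */
        }
        putchar('\n');
    }

    int main(void)
    {
        build_table();
        /* 'a' 'b' 'c' 'd' 'a' = 0 10 110 111 0, packed MSB-first as
         * 01011011 10000000, plus one padding byte for the final refill. */
        const uint8_t stream[] = { 0x5B, 0x80, 0x00 };
        decode(stream, 5);   /* prints "abcda" */
        return 0;
    }

The point is that the common, short codes are resolved in a single table hit; only the bits actually used are consumed before the next look-up.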

If you want hardware, use an FPGA.
-- Richard K, Dec 05 2002


