I have no idea if address buses are still used in the conventional sense, and I could be describing something as old hat as virtual memory or an MMU.

Storage hardware does not always place firm limits on storage capacity, though other factors present problems. Having said that, some cassettes are longer than others, so there can be larger media than those for which a system is designed.

Large integers can be represented by a value indicating word length. For instance, a sixteen-bit unsigned value could indicate the number of bits in the address, rather than a thirty-two-bit unsigned value simply referring to a single address in the memory space. So, instead of a specific address being sent through the address bus to the physical memory, a string of bits of specified length is sent in series (or several bits in parallel) and the relevant location in memory is accessed.

Clearly this could prove rather slow, so I also propose that, as well as doing single fetches, the memory can send a sequence of bytes from consecutive addresses, either incrementing or decrementing depending on other signals. The machine code of the CPU is modified to take these into consideration.
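For what it's worth, here is a minimal sketch of the receiving side in C, assuming a hypothetical bit-serial bus_read_bit() helper (my name, not real hardware). Because the length arrives first, the memory knows exactly how many address bits to clock in before the transfer starts.

    #include <stdint.h>

    /* Hypothetical bit-serial bus: returns the next bit (0 or 1). */
    extern int bus_read_bit(void);

    /* Read one variable-length address: a sixteen-bit length prefix giving
       the number of address bits, then that many bits, most significant
       first. The result is truncated to 64 bits purely so this sketch fits
       in a standard C type; the scheme itself has no such ceiling. */
    uint64_t read_address(void)
    {
        unsigned length = 0;
        for (int i = 0; i < 16; i++)
            length = (length << 1) | (unsigned)bus_read_bit();

        uint64_t addr = 0;
        for (unsigned i = 0; i < length; i++)
            addr = (addr << 1) | (uint64_t)bus_read_bit();
        return addr;
    }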
SPI flash memory...
http://www.atmel.co...cuments/doc5107.pdf ...works sort of like this. [Wrongfellow, Sep 30 2010]
Bit sliced architecture
http://en.wikipedia.org/wiki/Bit_slicing [Dub, Sep 30 2010]
|
|
It could well be; I don't know yet. Real life intervenes at this point and I'll get back to you. |
|
|
Actually, no, I don't think it is. I think that would involve switching entire banks of memory in and out of an address space. |
|
|
This reminds me of the SPI flash memory interface. The SPI READ instruction is basically the same as your "sequence of bytes" idea. |
|
|
" The READ instruction sequence reads the memory array [...] |
|
|
[T]he READ instruction is clocked in on the SI line, followed by the byte address to be read. [...] The data (D7-D0) at the specified address is then shifted out [...] If only one byte is to be read, the CS line should be driven high after the least significant data bit. To continue read operation and sequentially read subsequent byte addresses from the device by simply keeping CS low and provide a clock signal. The device incorporates an internal address counter that automatically increments to the next byte address during sequential read operation. The READ instruction can be continued since the byte address is automatically incremented and data will continue to be shifted out of the AT25FS040 [...] " |
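To make the quoted sequence concrete, here is a rough sketch of the host side in C. The spi_cs_low/spi_cs_high/spi_transfer helpers are my assumptions, and while the 0x03 READ opcode followed by a three-byte address is typical of small SPI flash parts, the datasheet is the authority for the AT25FS040.

    #include <stdint.h>
    #include <stddef.h>

    /* Assumed SPI helpers -- not from any particular library. */
    extern void    spi_cs_low(void);          /* assert chip select      */
    extern void    spi_cs_high(void);         /* release chip select     */
    extern uint8_t spi_transfer(uint8_t out); /* clock one byte each way */

    /* Sequential read: send one READ opcode and a starting address, then
       keep clocking; the flash's internal counter increments the byte
       address for us until chip select is raised. */
    void flash_read(uint32_t addr, uint8_t *buf, size_t len)
    {
        spi_cs_low();
        spi_transfer(0x03);                /* READ opcode                     */
        spi_transfer((addr >> 16) & 0xFF); /* 24-bit byte address, MSB first  */
        spi_transfer((addr >> 8) & 0xFF);
        spi_transfer(addr & 0xFF);
        for (size_t i = 0; i < len; i++)
            buf[i] = spi_transfer(0x00);   /* CS stays low: auto-increment    */
        spi_cs_high();                     /* raising CS ends the read        */
    }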
|
|
Thanks. I had a feeling it would be out there somewhere. RAM would be faster though, wouldn't it? Also, once it's done, later technology would be more compatible than it currently is. |
|
|
Bit of an unfair criticism, this (it's a good one that can be applied to any computational idea), but don't Turing (i.e. Universal) Machines already do this?! Baked, etc. (Ha ha - Turing Machine gags) |
|
|
I've seen self-describing stuff like this in graphical data-file formats - they sort of lay out early on how many bytes the rest of the file is supposed to occupy, allowing the system to quickly determine which bits correspond to which functions. I don't know whether there are any limits implicitly imposed by these protocols, but conceivably there could be an image file that's too large to be represented by such a protocol. |
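As a toy illustration of that kind of self-description (an invented header layout, not any real format): a fixed-width length field up front tells the reader how much follows, and the width of that field is exactly the implicit limit in question.

    #include <stdio.h>
    #include <stdint.h>

    /* Invented self-describing header: a 32-bit big-endian field stating
       how many payload bytes follow. That fixed width silently caps the
       file at 4 GiB minus one byte -- a wider field only moves the ceiling. */
    long long read_declared_size(FILE *f)
    {
        uint8_t b[4];
        if (fread(b, 1, 4, f) != 4)
            return -1;                     /* header truncated */
        return ((long long)b[0] << 24) | ((long long)b[1] << 16)
             | ((long long)b[2] << 8)  |  (long long)b[3];
    }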
|
|
So you've already suggested a sixteen-bit unsigned value be used to define the number of bits in the address, but what if your address range goes beyond this size? I suppose you could chain blocks of memory together, and maybe that's a better method in terms of scalability - it might allow for parallel processing as well. A block of 1024 bytes could have a start and an end address, the end address describing the start (relatively speaking) of the next block or, if null, marking the end of the read operation. That way, you could (relatively) quickly scan through the blocks to find the end point and determine the overall size, before parcelling out the actual data-read operations to different processors. |
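Something like this scan, sketched in C with invented names (fetch_block() is an assumed accessor, and the header layout is mine):

    #include <stdint.h>

    /* Hypothetical chained-block layout: each block carries the address of
       the next one, with 0 marking the end of the chain. */
    struct block {
        uint64_t next;            /* address of next block, 0 if last */
        uint8_t  payload[1024];
    };

    extern struct block *fetch_block(uint64_t addr);  /* assumed accessor */

    /* Walk the chain reading only headers, so the total size is known
       before any payload is handed out to worker processors. */
    uint64_t chain_size(uint64_t first)
    {
        uint64_t total = 0;
        for (uint64_t a = first; a != 0; ) {
            struct block *b = fetch_block(a);
            total += sizeof b->payload;   /* 1024 bytes per block */
            a = b->next;
        }
        return total;
    }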
|
|
Sixteen bits is around sixty-five thousand. Two to the power of ((two to the power of sixteen) minus one) in decimal is an integer with getting on for twenty thousand places. I find it hard to imagine why an address space of that size would be necessary unless I'm missing something crucial about the design of computers, which is easily possible. What am I missing exactly (not being sarcastic)? |
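For anyone checking the arithmetic behind "getting on for twenty thousand places":

    2^16 - 1 = 65535, so the longest address is 65535 bits, i.e. values up to 2^65535 - 1
    decimal digits of 2^65535 = floor(65535 x log10(2)) + 1 = floor(19728.0) + 1 = 19729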
|
|
Now you come to mention it, it does sound a bit like a Turing Machine, but it isn't done this way with IC-based memory so far as I know. |
|
|
//an address space of that size// It depends on what you're modelling - as computation gets faster and memory becomes more available, problems in NP space become implementable where they may have been impractical before - but you're right, that is a lot of bits. Having said that, as abstraction layers gain altitude, it's not uncommon to store information in increasingly redundantly sized data structures. (I wonder if there's a counter to Moore's Law stating something along the lines that the average size of a computer file doubles every 5 years?) |
|
|
So, people may have thought that an 8-bit memory addressing system was plenty 30 years ago - who knows what's likely to be normal in another 30 years? |
|
|
Well, it seems that the problem with an address space that large is that it dwarfs the number of elementary particles in the observable Universe, even if dark-matter WIMPs are included, and I have a hunch it would even dwarf the factorial of that number. Under Moore's Law, assuming doubling every year and a current address space of four gigabytes (?), that would give us over fifty millennia. This is not the kind of integer normally found outside pure mathematics. |
|
|
Erm, isn't this just bit-sliced architecture? {Linky}
The 6502 (and many derivatives) could effectively handle integers of any size. |
|
|
No, it's about address space, not data. The large integer to which I'm referring is the address of the word, not the word itself. So far as I know, the 6502 has an address space of sixty-four kilobytes unless it was paged. |
|
|
Ah, I misunderstood - (I guess the clue's in the title... and the description!) - the address space is infinite.
What would happen to out-of-bounds address exceptions? |
|
|
//Wot 2^524,288 not big enough for you?! |
|
|
Not infinite, but very large. Infinite could be provided with something like a separate line marking the start and end of the address, ignoring wear and tear, but this doesn't do that, because I think it's already larger than any practical use requires. |
|
|
Out of bounds? The next sixteen bits after the end of the stream would be interpreted as referring to the next address, so it'd be a software bug. |
|
|
Ah. I had optimistically assumed that this idea was to provide a sensible amount of space on forms which require one to insert one's postal address. |
|
|
The largest RAM array I have seen is 2 TB. How much physical memory you can put in a machine is a "how much money have you got?" type of question. If you need high-performance computing and can afford it, you can start climbing the metric prefixes:
giga, tera, peta, exa, zetta, yotta, etc.
Part of what you are describing, though, sounds like pretty standard digital-logic tricks to improve performance.
Some logic families clock an output data bit on a rising clock edge, some on a falling clock edge. Combine the two and you get DDR, or double-data-rate, memory. Store even memory addresses on one chip and odd addresses on another, start a memory read or write cycle on both simultaneously, and you get two memory fetches at the same time because you have "interleaved" memory access (see the sketch below).
The only downside is that you have to have more physical sticks of identical RAM on a board to interleave addresses.
If the physical memory array is larger than the CPU address space, the operating system starts translating between logical and physical memory addresses. This kind of thing has been going on for years, as operating systems use files stored on hard drives to represent virtual memory. |
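A minimal sketch of that two-way interleave (bank selection by the low address bit is one common scheme, though details vary by controller, and bank_read() is an assumed accessor):

    #include <stdint.h>

    extern uint8_t bank_read(unsigned bank, uint64_t row);  /* assumed accessor */

    /* Even addresses live on chip 0, odd on chip 1, so two consecutive
       fetches land on different banks and real hardware can overlap them. */
    static unsigned bank(uint64_t addr) { return (unsigned)(addr & 1); }
    static uint64_t row(uint64_t addr)  { return addr >> 1; }

    void read_pair(uint64_t addr, uint8_t out[2])
    {
        out[0] = bank_read(bank(addr),     row(addr));
        out[1] = bank_read(bank(addr + 1), row(addr + 1));
    }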