
Network Based Virtual Memory

Swap your swap
  (+2, -1)

Consider an enterprise-level computer network, with dozens, hundreds, or perhaps even thousands of computers networked together. Each of these computers has a given amount of RAM, and when that memory is full it turns to the rather slow virtual memory system to fill the demand, causing performance to drop quite rapidly. But with Ethernet and Fibre Channel speeds already in the 10 Gb/s range, and poised to go even higher, memory swapping need no longer be limited by the relatively pokey hard drive interface.

In its simplest form, this idea could be implemented as a server with a large amount of RAM, and a fast connection to all of the computers on the network. Individual computers could request swap space on the server, and while it wouldn't be quite as fast as native RAM, it should be a substantial improvement over caching to a hard drive.
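
To make this concrete, here is a minimal sketch of the client side of such a request, assuming a simple length-prefixed message format. The opcodes, port number, and framing are invented for illustration, not an existing protocol:

# Illustrative only: a toy client for a hypothetical network swap server.
# The opcodes, port, and framing are assumptions, not an existing protocol.
import socket
import struct

PAGE_SIZE = 4096           # assume conventional 4 KiB pages
OP_STORE, OP_FETCH = 1, 2  # invented opcodes

class RemoteSwapClient:
    def __init__(self, host, port=9520):   # port number is arbitrary
        self.sock = socket.create_connection((host, port))

    def store_page(self, page_id, data):
        # frame: opcode, page id, payload length, then the payload itself
        assert len(data) == PAGE_SIZE
        header = struct.pack("!BQI", OP_STORE, page_id, len(data))
        self.sock.sendall(header + data)

    def fetch_page(self, page_id):
        self.sock.sendall(struct.pack("!BQI", OP_FETCH, page_id, 0))
        buf = b""
        while len(buf) < PAGE_SIZE:        # read until a full page arrives
            chunk = self.sock.recv(PAGE_SIZE - len(buf))
            if not chunk:
                raise ConnectionError("swap server went away")
            buf += chunk
        return buf

The server end could be as dumb as a big dictionary keyed by client and page id, answering the same two message types.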

But for every computer requiring more RAM than it has available, there are likely several others that aren't making use of the memory they have. The server could then itself swap out the contents of its own memory to other computers on the network. In essence, the entire pool of RAM installed in all of the computers on the network becomes available to use as swap space.
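
A sketch of the bookkeeping the server might do when its own pool fills up and it starts farming pages out to volunteer machines. The donor list, capacity figure, and the forward_to_donor call are assumptions for illustration:

# Illustrative sketch of the server-side placement logic described above.
class SwapServer:
    def __init__(self, local_capacity_pages, donors):
        self.local = {}                    # page_id -> bytes held in server RAM
        self.capacity = local_capacity_pages
        self.placement = {}                # page_id -> "local" or a donor address
        self.donors = donors               # machines that advertised spare RAM

    def store(self, page_id, data):
        if len(self.local) < self.capacity:
            self.local[page_id] = data
            self.placement[page_id] = "local"
        else:
            donor = self.pick_donor()
            self.forward_to_donor(donor, page_id, data)
            self.placement[page_id] = donor

    def locate(self, page_id):
        # tells a retrieving client where the page lives: the server or a donor
        return self.placement[page_id]

    def pick_donor(self):
        # naive round-robin over the volunteer machines
        donor = self.donors.pop(0)
        self.donors.append(donor)
        return donor

    def forward_to_donor(self, donor, page_id, data):
        # the actual network send is elided; it could reuse the same wire
        # protocol the clients speak to the server
        raise NotImplementedError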

Of course, networks are inherently unreliable, so it would require some clever data management to keep the engineering workstation designing the critical part under a tight deadline from crashing when the secretary reboots the damn office PC because Word keeps printing garbage. To that end, individual computers would also be responsible for caching memory locally, in case the pages they want no longer exist on the network for some reason when they go back to retrieve them. Although this might seem to negate the performance benefits of this system, it wouldn't really be an issue as long as memory is constantly being cached in the background while other applications are running.
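
One possible shape for that background caching: every page handed to the network also goes into a queue that a background thread lazily writes to a local shadow file, so a vanished remote page can still be recovered. The file layout and queueing policy here are just one arrangement, sketched for illustration:

# Illustrative: keep a lazily written local shadow copy of every page
# sent out over the network, as insurance against the remote copy vanishing.
import queue
import threading

class ShadowCache:
    def __init__(self, path="swap_shadow.bin", page_size=4096):
        self.f = open(path, "w+b")
        self.page_size = page_size
        self.pending = queue.Queue()
        self.lock = threading.Lock()
        threading.Thread(target=self._writer, daemon=True).start()

    def enqueue(self, page_id, data):
        # called right after the page is handed to the swap server;
        # returns immediately so the foreground path is not slowed down
        self.pending.put((page_id, data))

    def _writer(self):
        while True:
            page_id, data = self.pending.get()
            with self.lock:
                self.f.seek(page_id * self.page_size)
                self.f.write(data)         # persisted in the background

    def recover(self, page_id):
        # fallback when the network copy is gone (donor rebooted, etc.)
        with self.lock:
            self.f.seek(page_id * self.page_size)
            return self.f.read(self.page_size)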

However, this doesn't help much when more active memory is requested than is currently available on the local computer, either because it's all in use or because the system hasn't had a chance to cache the unused pages yet. In that case, the computer could make a special request to the swap server, essentially saying "I need memory RIGHT NOW. Here's a page of memory, but just hold on to it for a bit because I'll want it back very soon." With such a request, the server won't farm out the page to any other computers on the network, but will retain it instead in its own memory. In this way, the computer doesn't have to bother with caching it locally, because it can expect with a high degree of certainty that the page will still be available when needed. However, it's expected that the computer will ask for the page back as soon as possible, and so there's an upper limit on how much memory the server will allow to be stored this way.
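
A sketch of how the server might enforce that upper limit on "hold this, I want it right back" pages; the quota figure and method names are made up for the example:

# Illustrative: pinned pages stay in the server's own RAM, never farmed out,
# and are capped so the fast path can't crowd out everything else.
class PinnedStore:
    def __init__(self, max_pinned_bytes=64 * 1024 * 1024):   # arbitrary cap
        self.max_pinned_bytes = max_pinned_bytes
        self.pinned = {}          # (client, page_id) -> bytes
        self.used = 0

    def pin(self, client, page_id, data):
        if self.used + len(data) > self.max_pinned_bytes:
            return False          # over the limit: client must swap normally
        self.pinned[(client, page_id)] = data
        self.used += len(data)
        return True               # page is guaranteed to stay on the server

    def unpin(self, client, page_id):
        data = self.pinned.pop((client, page_id))
        self.used -= len(data)
        return data               # handed straight back, no donor round trip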

Since RAM tends to be the limiting factor in performance—far more these days than processor speed, hard drive space, or any other specification—having such a system in place could extend the useful life of practically every computer on the network. The money saved by not upgrading hardware as often could then be put into upgrading the network, which would in turn improve the performance of all of the individual computers.

ytk, Jun 23 2011

       Distributed processing is by no means a new concept.   

       It's not the hardware; it's getting the operating system right to make efficient use of distributed resources.   

       Widely Known To Exist.
8th of 7, Jun 23 2011
  

       //Widely Known To Exist// err... cache-clustering ? never heard of it.   

       The OS of course would have to be able to differentiate between recovery data and "live" data.   

       [+]
FlyingToaster, Jun 23 2011
  

       This sort of thing was experimented with when Transputers first came out.   

       It doesn't really matter whether the OS shares the tasks, the memory containing the tasks, or both. The trick is to handle the swaps efficiently.   

       The Meiko Computing Surface was a good attempt at that. <link>   

       It's all about moving away from the classic Von Neumann architecture.   

       // RAM tends to be the limiting factor in performance //   

       No, the OS and the apps are the "limiting factor". Try running native DOS 3.3 on a Pentium 4 ... quite quick. Of course it can't make use of the huge amounts of RAM now available, or your 1 TB SATA HDD. But the point is that a lot of the MIPS on modern PCs are sucked up by non-user tasks within the OS, plus the pretty-pretty UI.   

       OSs from some Well Known Major Vendors are best described as "bloatware" compared to their predecessors. Yes, Bill, you. We're looking at you. See that effigy on the top of the bonfire ... ?   

       PCs actually have plenty of "grunt" available for number crunching if you don't piss it away on lapse/dissolve window transitions ...   

       <end of unintended but relevant rant>
8th of 7, Jun 23 2011
  

       //It's all about moving away from the classic Von Neumann architecture//
I'm pretty sure the transputer was a von Neumann architecture device. Sure as Hell wasn't Harvard.
{scuttles off to find old Inmos manuals}
AbsintheWithoutLeave, Jun 23 2011
  

       // No, the OS and the apps are the "limiting factor". Try running Native DOS 3.3 on a Pentium 4 ... quite quick. //   

       Sure, but try getting, say, Firefox to run on DOS.   

       Yes, software does tend to get progressively more bloated with time. This is inevitable, since as more resources become generally available software developers will naturally take advantage of them, either to expand the feature set or simply out of an unwillingness or inability to optimize.   

       The net result is that as you add new programs, or even just if you upgrade your software on a regular basis, your system will tend to use more memory over time. And once you hit the ceiling, and have to rely on virtual memory regularly, your system will slow to a crawl for general usage. And even though you may have a ton of free hard drive space and you never come close to utilizing the full capacity of your processor, if you can't put more RAM in the system you're basically stuck and have to replace it with a new one.   

       So, should software be more efficient and less bloated? That's strictly an academic debate, because in the real world it won't happen for the foreseeable future. We just have to accept that reality and deal with it somehow.
ytk, Jun 23 2011
  

       ... easiest implementation may be ReadyBoost, but on a network RAM drive. Good idea [+]
I wonder if you can ask Windows to do that now: use a network drive for ReadyBoost. If you could, then this is trivial to bake. Just share a RAM drive and you're set.
ixnaum, Jun 24 2011
  

       ... you know what? ... This will be easy after all:
1) get Linux machine A
2) get Linux machine B
3) on machine A: make /dev/shm available over the network
4) on machine B: mount that share
5) done!
... the only thing left is to figure out redundancy;
otherwise, if A crashes it will take B down along with it.

I'm not on a 1 Gb/s network to try this in RL ... but I would sure love to see some benchmarks
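
For what it's worth, a crude timing sketch of that benchmark might look like this, assuming machine A's /dev/shm has already been mounted on B at /mnt/remote_shm (both paths below are placeholders):

# Crude benchmark sketch: time page-sized writes and reads against the
# remote tmpfs mount versus a local disk-backed path. Paths are placeholders.
import os
import time

PAGE = 4096
PAGES = 25000   # roughly 100 MB of "swap" traffic

def time_pages(path):
    data = os.urandom(PAGE)
    start = time.time()
    with open(path, "w+b") as f:
        for i in range(PAGES):
            f.seek(i * PAGE)
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
        for i in range(PAGES):
            f.seek(i * PAGE)
            f.read(PAGE)
    return time.time() - start

print("remote /dev/shm:", time_pages("/mnt/remote_shm/swaptest"), "s")
print("local disk:     ", time_pages("/var/tmp/swaptest"), "s")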
ixnaum, Jun 24 2011
  

       Quote from the early '90s:   

       "You can't handle the truth !"   

       (Jack Nicholson, A Few Good Men)   

       // try getting, say, Firefox to run on DOS //   

       Try writing a self-contained windowing web browser to run on a minimal, small-footprint OS ?   

       // This is inevitable //   

       ... said Bill Gates, as he reclined on his couch stuffed with thousand-dollar bills ...   

       // an unwillingness or inability to optimize //   

       Time for a purge. Spare not even the children, lest the evil persist.   

       // if you can't put more RAM in the system you're basically stuck and have to replace it with a new one //   

       Awesome. You say that like you actually believe it.   

       // should software be more efficient and less bloated //   

       Is that a trick question ?   

       // We just have to accept that reality and deal with it somehow //   

       See above, particularly the bit about not sparing the children ...
8th of 7, Jun 24 2011
  

       // Try writing a self-contained windowing web browser to run on a minimal, small-footprint OS ? //   

       Not what I said, but sure. Go for it. Why aren't you using DOS and writing all your own applications? Nobody's stopping you. The rest of us will just keep using our bloated, inefficient software to actually get stuff done.   

       // Awesome. You say that like you actually believe it. //   

       If I need to run programs X, Y, and Z simultaneously, and the sum total of memory required to run those programs efficiently is more than my computer supports, what else do you propose I do?
ytk, Jun 24 2011
  
      