
double overclock for 1/2 second

Overclock CPU cores to double speed, let them cool down, rotate which one you use; some particular kinds of programs benefit, and compilers can optimize around it
 
(+1, -1)

During 2020, a 3.4 GHz clock speed for a multicore desktop PC is nothing special. Depending on the actual computer program and its flow, a 6.8 GHz PC would double its speed.

When you first turn your computer on, it is cold. The CPU could run at 6.8 GHz for, say, an entire 0.5 second before either becoming unreliable or compromising its service life.

So, when you turn your computer on, why not schedule a task you want done twice as fast and run the CPU at 6.8 GHz? A lot of computer programs take fewer than 3.4 (or 1.7?) billion cycles to complete.
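To put numbers on it: a 3.4-billion-cycle program takes 3.4e9 / 3.4e9 = 1 second at stock speed, but 3.4e9 / 6.8e9 = 0.5 second at the doubled clock, so it fits entirely inside that single cold half-second burst.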

This brings up something where a person with equations and simulations might know what they are talking about: are there any compilable, useful computer programs where, with say 8 cores, keeping seven cores cold but running 1 core at 6.8 GHz, then hopping over to a cold new core each 0.5 second, allows the program to do its thing at 6.8 GHz the entire time and finish executing (do the big calculation) faster, even twice as fast?

Twice as fast, noting the compiler did what it could with highly serial, highly non-parallel code. I think some spreadsheets would qualify.
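As a rough illustration of the core-hopping loop above, here is a minimal sketch, assuming Linux and Python, with the actual frequency boost left to the firmware or operating system; all it does is move a single-threaded hot loop from core to core with CPU affinity so each core gets time to cool before its next burst.

import os
import time

CORES = list(range(os.cpu_count() or 1))   # e.g. 8 cores
HOP_INTERVAL = 0.5                         # seconds of work per core

def busy_work(deadline):
    # Stand-in for the highly serial, non-parallel computation.
    x = 0
    while time.monotonic() < deadline:
        x += 1
    return x

def run_with_core_hopping(total_seconds):
    end = time.monotonic() + total_seconds
    hop = 0
    while time.monotonic() < end:
        core = CORES[hop % len(CORES)]
        os.sched_setaffinity(0, {core})    # pin this process to one core
        busy_work(min(end, time.monotonic() + HOP_INTERVAL))
        hop += 1                           # next burst lands on a cooler core

run_with_core_hopping(5.0)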

Another thing that might benefit from this, and I do not know what this is called on the chip, is to run the data comminuter (data chopper) twice as fast. Say you have the ability to make a serial port faster than a desktop PC. The desktop PC uses the serial port and has a thing that tosses data into 1 of 8 piles as it arrives. By tossing the data into 8 piles (for perusal later), something kind of like "bandwidth" goes up. That's a data comminuter. So if the data comminuter can run on a cold core at double speed, and rotate around the 8 cold cores, then it can always comminute data at double speed (6.8 GHz), doubling the amount of bits the computer can use that come through the computer's serial connection. That reminds me of fiber optics.

-> Does anyone actually know what the data comminuter is called? It's all digital, so it's not a D/A or A/D converter.
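For what it is worth, the pile-tossing part is easy to sketch in software. This is purely illustrative Python (a real version would live in the serial controller, not in a script): incoming chunks get dealt into 8 queues round-robin, so each pile can be drained later at a slower pace.

from collections import deque

NUM_PILES = 8

def comminute(chunks):
    # Deal incoming byte chunks into NUM_PILES queues, round-robin.
    piles = [deque() for _ in range(NUM_PILES)]
    for i, chunk in enumerate(chunks):
        piles[i % NUM_PILES].append(chunk)
    return piles

# 80 fake one-byte chunks land 10 to a pile:
piles = comminute(bytes([n]) for n in range(80))
assert all(len(p) == 10 for p in piles)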

Similarly, at a solid state drive (SSD) (memory chip disk), could you overclock 1/8th of your SSD to double speed if you knew you were only going to look at it every few seconds (time to cool down)? If you have a physical configuration that separates the chips, or software smart enough to purposefully speak to different chips (basically a RAID situation), it looks like you can double the speed of an SSD that is, say, only 1/2 or 1/4 full. That gives SSD makers, operating system makers, and program writers an incentive to include "I want this chip! double-speed-producing RAID" in their products.
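A toy model of that "I want this chip!" striping, again just a sketch (the chips here are plain byte buffers; a real drive would do this in its controller firmware or the OS block layer):

STRIPE = 4096          # bytes per stripe
NUM_CHIPS = 8

def stripe_write(data, chips):
    # Deal consecutive stripes of data across the chips, round-robin.
    for i in range(0, len(data), STRIPE):
        chips[(i // STRIPE) % NUM_CHIPS].extend(data[i:i + STRIPE])

def stripe_read(chips, length):
    # Reassemble the original byte order from the striped chips.
    out = bytearray()
    offsets = [0] * NUM_CHIPS
    i = 0
    while len(out) < length:
        c = i % NUM_CHIPS
        out += chips[c][offsets[c]:offsets[c] + STRIPE]
        offsets[c] += STRIPE
        i += 1
    return bytes(out[:length])

chips = [bytearray() for _ in range(NUM_CHIPS)]
payload = bytes(range(256)) * 200          # ~50 kB of test data
stripe_write(payload, chips)
assert stripe_read(chips, len(payload)) == payload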

So, the compiler looks at the code and uses this new thing, cold cores, to decide what parts of the computer program are going to run where, going so far as to completely turn off a core so it physically cools down completely, just to run a cold-core segment of program code on it.

And, here's the nifty part:

8 cores, only one running, but at double speed, might halve all latency and UI delay while web browsing. Latency, microfreezing, and delay while web browsing may have already ceased to exist. My computer is kind of strange.

beanangel, Dec 16 2020







I think the problem comes from software developers, not the hardware. (It seems...) whenever hardware becomes newer/faster/better, software is updated to push it to its absolute limit, so ANY glitch results in a slowdown.
I think (being an engineer...) that software developers should think more like crane designers: know the absolute limit, but incorporate a safety factor for "normal usage", so things aren't always pushed close to breaking point. Probably don't have to go as far as a crane (safety factor is generally 5 or more), but at least 1.2 or so.
neutrinos_shadow, Dec 16 2020
  

       // I think the problem comes from software developers //   

... or possibly management, Pointy-Haired Bosses, and worst of all the mouth-breathing witless dolts in sales and marketing.

Left to themselves, most software developers would spend their days handcoding brilliant little crumbs of tightly-wound pure assembler that ran like greased lightning and used almost no memory... but then there'd never be a saleable product.
8th of 7, Dec 16 2020
  

Individual core OC and heat balancing/load switching are well baked in Ryzen chips. Don't know about Intel. And yes, it runs like a campaigning Republican through the ghetto (or like a campaigning Democrat through the business district).
Voice, Dec 16 2020
  

       Left to themselves, most software developers would be strung out on Monster Cola and three-day-old pizza playing Overwatch in a crumbling warehouse with no indoor plumbing.
RayfordSteele, Dec 16 2020
  
      