Variable make-work challenge/response protocol

Minimize server resource hogging during DOS attacks

There are a wide variety of challenge/response protocols used in computer security. Many of them, however, have the unfortunate disadvantage that they require significant CPU time to issue each challenge or evaluate each response, require the server to keep track of each challenge issued, or both. In most cases, such resource utilization would not be a problem. Unfortunately, if a server is being hit with a denial-of-service attack, a protocol that requires significant resources may easily overwhelm even a powerful server.

To avoid this problem, I would suggest equipping servers with a preliminary security layer which would require that a client wishing to connect perform a certain amount of work before the server will allocate any persistent resources or spend significant time on authentication. Once the client had performed that work, it would be allocated some server resources and allowed one attempt to enter the main security system.

The protocol would require the definition of a fast, non-invertible function f(n) which would hash 64 bits of data, yielding 64 bits of hash. The actual specifics of such a function are not relevant here beyond the fact that it should be fast and non-invertible. The protocol would also require a means of occasionally generating unpredictable 64-bit random numbers. There would be some global state variables to keep track of how many intrusion attempts there have been recently; these would be distilled into a value 'k' which would increase when too many intrusion attempts passed preliminary security and decrease when they did not. Depending upon the complexity of the hash function and the amount of intrusion, 'k' might range from 8 to 24 or so.
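
As a minimal sketch (Python purely for illustration): SHA-256 truncated to 64 bits stands in for the deliberately unspecified f(n), and the adjustment policy for 'k' is a hypothetical placeholder, not something the idea pins down:

    import hashlib

    def f(n: int) -> int:
        """Stand-in for the fast, non-invertible hash: 64 bits in, 64 bits out."""
        digest = hashlib.sha256(n.to_bytes(8, "big")).digest()
        return int.from_bytes(digest[:8], "big")

    k = 8  # difficulty; rises when too many bogus attempts pass preliminary security

    def adjust_k(recent_passes: int, threshold: int = 100) -> None:
        """Hypothetical policy: nudge k within the 8..24 range mentioned above."""
        global k
        k = min(24, k + 1) if recent_passes > threshold else max(8, k - 1)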

Once per second, the protocol handler would generate a 64-bit random number and store it in a table using a 16-bit index. The items could be stored in slot 0, slot 1, etc., wrapping at some convenient number, or in any other order desired.
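
Continuing the sketch, the nonce table might look as follows (the table size and names are illustrative; any convenient wrap point addressable by a 16-bit index would do):

    import secrets

    TABLE_SIZE = 4096
    nonce_table = [0] * TABLE_SIZE
    next_slot = 0

    def tick() -> None:
        """Run once per second: store a fresh unpredictable 64-bit number."""
        global next_slot
        nonce_table[next_slot] = secrets.randbits(64)
        next_slot = (next_slot + 1) % TABLE_SIZE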

When a computer asks to connect to the server, the server should xor the IP of the client with the last random number and compute the hash of that (call this H1). The server should compute the hash of H1 (called H2), clear the rightmost 'k' bits of H1, and set the bit just above them (so the position of the lowest set bit tells the client how many bits it must search). Finally, it should send the index of the last random number, H1 (as altered), and H2. Once it has done these things, it need remember nothing about the request.
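
Challenge issuance, building on the pieces sketched above; reading "set the (k+1)th bit" as setting bit k (zero-based) as a marker is my assumption:

    def make_challenge(client_ip: int) -> tuple[int, int, int]:
        """Issue a challenge; the server keeps no per-request state."""
        index = (next_slot - 1) % TABLE_SIZE      # slot holding the latest nonce
        h1 = f(client_ip ^ nonce_table[index])
        h2 = f(h1)
        # Clear the rightmost k bits of H1, then set bit k as a marker so the
        # client can see how many bits it must search.
        masked_h1 = (h1 & ~((1 << k) - 1)) | (1 << k)
        return index, masked_h1, h2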

The client should take the received H1 value and try different values for the least-significant bits until it finds a value which, when hashed, yields H2. It should send the server a packet containing the index and the H1 value it found.
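
Under the masking scheme sketched above, the client-side search costs about 2^k hash evaluations on average:

    def solve_challenge(masked_h1: int, h2: int) -> int:
        """Brute-force the cleared low bits of H1 until f(candidate) == H2."""
        k = (masked_h1 & -masked_h1).bit_length() - 1   # marker bit reveals k
        base = masked_h1 & ~((1 << (k + 1)) - 1)        # bit k itself was overwritten
        for low_bits in range(1 << (k + 1)):
            candidate = base | low_bits
            if f(candidate) == h2:
                return candidate
        raise ValueError("no candidate matched H2")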

The server will then take the index, find the appropriate random number in the table, xor it with the client IP address, and hash it. The result of that hash should match what the client sent.
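
Validation is a single hash plus a comparison; an end-to-end walk-through under the same assumptions:

    def verify_response(client_ip: int, index: int, h1_claimed: int) -> bool:
        """One hash: recompute H1 from the stored nonce and compare."""
        return f(client_ip ^ nonce_table[index]) == h1_claimed

    # Full exchange (client IP shown as a 32-bit integer for illustration):
    tick()                                    # server refreshes its nonce table
    idx, mh1, h2 = make_challenge(0xC0A80001)
    answer = solve_challenge(mh1, h2)         # client-side brute force
    assert verify_response(0xC0A80001, idx, answer)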

The preliminary security layer would not be designed to be invulnerable, but rather to minimize the effect of denial-of-service attacks. The expectation would be that the server could afford to allocate resources every time an intruder got through the first layer, provided that didn't happen too often; there would be a big difference between having to allocate resources for a few bogus connection attempts per second and having to allocate them for many thousands.

Responding to a challenge request would only require computing two hashes; validating a response would require one. The time required for the server to handle these requests would be independent of 'k', even though the client-side time required (on the order of 2^k hash evaluations) would increase exponentially with it.

If it were deemed acceptable for the server to be confronted with 100 packets/second that pass the first layer, and acceptable to force a user to take a minute for authentication during a denial-of-service attack, an attacker would have to have 6,000 times as much computing horsepower as a legitimate user in order to prevent those targets from being met. Although some attackers would be able to muster that sort of power against some targets, it would almost certainly be beyond the abilities of most "script kiddies".
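
The 6,000 figure is just the ratio of the two rates; a quick sanity check:

    # A legitimate user solving one puzzle per 60 s produces 1/60 solution/second;
    # the attacker must produce 100 solutions/second to keep the server saturated.
    legit_rate = 1 / 60
    attacker_rate = 100
    print(attacker_rate / legit_rate)   # -> 6000.0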

supercat, May 09 2006

       In what type of environment do you think a minute would be an acceptable authentication time, [supercat]?
methinksnot, May 09 2006
  

       It would only take a minute in cases where the server was under a DOS attack by someone with access to 6,000 times as much CPU horsepower as the legitimate user. In the presence of such an attack, a server which could not quickly reject most of the bogus access attempts would be forced to throw out a significant portion of incoming packets without looking at them. Once that started happening, legitimate users would have a very difficult time getting in at all. When such a powerful adversary isn't attacking, the server's 'k' value would be smaller, allowing users to log in much more quickly.   

       True, having to spend a minute logging in would be annoying, but it would be a big improvement over trying to log into a completely-swamped server.
supercat, May 09 2006
  

       Fair enough. Good concept for absolutely positively must-remain-on servers.
methinksnot, May 09 2006
  

       If the protocol were standardized, it could be usable for servers that need to provide access to the public. Shared-secret protocols exist for servers that need to provide access to a select group of people while being resistant to DOS attacks. Such protocols are not difficult. The advantage here is that no advance coordination is required between server and client.
supercat, May 10 2006
  

       Seems like those throwaway character-recognition generators used to hold your place in online queues [5up3rC2t]. I think also that is a small price to pay for security, compared to the amount of hassle users face trying to board an unstable server.
reensure, May 10 2006
  
      