Comment Re:Changing a hash function... (Score 1) 156

The real problem here is that it's fairly easy to compute a set of hash keys that are known to generate collisions on a specific hash table implementation. The easiest fix by far - the fix that perl implemented in 2003 - is to generate a random value when the hash is initialized, and XOR each incoming key with it before processing. That breaks collision prediction on the attacker's side quite effectively.
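To see how little the attacker actually has to do, here's a toy Python sketch. The DJB2 hash, table size, and key format are all illustrative assumptions standing in for a fixed, publicly known hash implementation, not any particular language runtime:

```python
def djb2(key: str) -> int:
    """Classic DJB2 string hash -- stands in for a fixed, predictable hash."""
    h = 5381
    for ch in key:
        h = ((h * 33) + ord(ch)) & 0xFFFFFFFF
    return h

TABLE_SIZE = 1024
TARGET_BUCKET = 0

def colliding_keys(count: int) -> list[str]:
    """Enumerate candidate keys until `count` of them land in the same bucket."""
    found, i = [], 0
    while len(found) < count:
        key = f"k{i}"
        if djb2(key) % TABLE_SIZE == TARGET_BUCKET:
            found.append(key)
        i += 1
    return found

if __name__ == "__main__":
    print(colliding_keys(10))  # every one of these maps to bucket 0
```

With a per-hash random value XORed into the key first, the attacker can no longer run this enumeration offline, since they don't know the seed.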

Comment Re:Priorities (Score 1) 156

To be precise, it's elements with equal *hash* values, not identical keys - an identical key would simply overwrite the prior value. Internally, the language runs a hash algorithm against the key and uses the resulting value as an index into the array that *actually* holds the key/value pair. If multiple keys hash to the same index, then that slot holds another array, containing all the key/value pairs that mapped to that index. You then need to walk that array to find the key you're looking for.

The downside of this, of course, is that if all of your keys map to the same hash value, then you have to walk the list of *all* key/value pairs to find your value. Producing this scenario on demand is how you kill servers with it.
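For concreteness, here's a toy Python version of the chained layout described above (the class name and sizes are made up for illustration, not how any real runtime spells it):

```python
class ChainedHashMap:
    """Each slot in the backing array holds a list of (key, value) pairs."""

    def __init__(self, size=1024, hash_fn=hash):
        self.buckets = [[] for _ in range(size)]
        self.hash_fn = hash_fn  # injectable, so we can simulate the worst case

    def put(self, key, value):
        bucket = self.buckets[self.hash_fn(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                # same key overwrites the prior value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        bucket = self.buckets[self.hash_fn(key) % len(self.buckets)]
        for k, v in bucket:             # walk the chain for this slot
            if k == key:
                return v
        raise KeyError(key)

# Worst case: a "hash" that sends every key to slot 0, so all pairs share one chain.
degenerate = ChainedHashMap(hash_fn=lambda k: 0)
for i in range(10_000):
    degenerate.put(f"key{i}", i)
print(degenerate.get("key9999"))        # has to scan ~10,000 entries
```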

The "real" code fix so far is to transmute the key with a random value (generated at application startup, or at instantiation of the hash map) before running the hash algorithm, thus making it impossible to predict which keys will generate hash collisions. This is how perl was fixed this back in 2003 :)

Most folks seem to simply be setting limits on the number of fields in POST (or the maximum size of a POST payload) for now until they can fix their code. Putting limits on the number of HTTP headers in a request is needed as well, as apache itself puts headers in a hash map.
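As a sketch of that stopgap, here's what it might look like as WSGI middleware: cap the body size and the number of form fields before the framework ever builds a hash map out of them. The limits and the helper name are illustrative assumptions, not anyone's shipped mitigation:

```python
import io
from urllib.parse import parse_qsl

MAX_FIELDS = 1000
MAX_BODY_BYTES = 1 << 20  # 1 MiB

def limit_post(app):
    def middleware(environ, start_response):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        if length > MAX_BODY_BYTES:
            start_response("413 Request Entity Too Large",
                           [("Content-Type", "text/plain")])
            return [b"payload too large\n"]
        if environ.get("REQUEST_METHOD") == "POST" and length:
            body = environ["wsgi.input"].read(length)
            fields = parse_qsl(body.decode("latin-1"), keep_blank_values=True)
            if len(fields) > MAX_FIELDS:
                start_response("400 Bad Request",
                               [("Content-Type", "text/plain")])
                return [b"too many form fields\n"]
            environ["wsgi.input"] = io.BytesIO(body)  # let the app re-read it
        return app(environ, start_response)
    return middleware
```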

Comment Re:Or was it just a lucky piggy back? (Score 3, Interesting) 57

Entirely plausible. Conficker's phone-home mechanism was an algorithm that hashed the current date/time to generate a nonsense domain name, which it would then try to look up and grab a payload from. All the Bad Guys had to do was register one a few hours in advance, put up the payload, and wait. The groups who were fighting the thing managed to decompile the algorithm and play it forward, generating a list of hundreds of thousands of domain names that they then took to the various registries to get blocked. Paul Vixie was a big part of this, and here's a pretty good article on the group.
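A toy illustration of that kind of date-seeded domain generation algorithm, in Python - emphatically *not* Conficker's actual algorithm, just the shape of the idea. Both sides can run the same deterministic function: the operators to know which domain to register next, and defenders to "play it forward" and block the whole list:

```python
import hashlib
from datetime import date, timedelta

def domains_for(day: date, count: int = 5) -> list[str]:
    """Deterministically derive a handful of hostnames from the date."""
    names = []
    for i in range(count):
        digest = hashlib.md5(f"{day.isoformat()}-{i}".encode()).hexdigest()
        # Turn the digest into something that looks like a hostname.
        names.append("".join(chr(ord('a') + int(c, 16) % 26)
                             for c in digest[:12]) + ".com")
    return names

# "Playing the algorithm forward": enumerate every domain for the next 30 days.
today = date.today()
upcoming = [d for offset in range(30)
            for d in domains_for(today + timedelta(days=offset))]
print(len(upcoming), upcoming[:3])
```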

It would not surprise me at all if CIA/Mossad/etc managed to get one of those domains un-blocked and used to deliver the Stuxnet payload.

Comment Re:the reason she failed is that . . (Score 5, Interesting) 200

More to the point, it seemed that the biggest initiatives within Yahoo while I was there (from 2009 until early this year) were *all* centered around profit, not users - mainly cost-cutting and ad tech. As if the goal wasn't to grow users, just to grow revenue and profit per existing user. What opened my eyes was when the cost-cutting initiatives that made sense - primarily the data center consolidations, which definitely needed to get done ASAFP - started getting pushed back due to the need for quarter-to-quarter profit management. Bartz should have grown a pair and pushed the consolidation forward even if it meant missing the street for a quarter; Yahoo would have reaped the rewards much sooner.

I'll also never forget the quarterly all-hands meeting where the major product announcement for the quarter was...*full-page ads on the login page*.

Sorry I didn't stick around to see Bartz go, but I couldn't risk her *not* going.

Comment Courtney Love talked about this... (Score 1) 243

http://www.salon.com/technology/feature/2000/06/14/love/print.html

Apparently a "work for hire" provision did get slipped into federal copyright law - and I mean literally slipped in while no one was paying attention. After Love's speech brought attention to this, the provision was repealed a year later.

So unless the laws get changed again (and the RIAA *will* try), the artists have the upper hand. Sad to imagine how much they'll spend in legal fees to get to their money though.

Comment Re:That's not Facebook's problem (Score 3, Informative) 509

The RPC system they're using is Thrift (http://thrift.apache.org/), which they developed because JSON was becoming a bottleneck. And yeah, there's a metric crapload of memcached in their data centers as well. The multi-hour outage Facebook had late last year was due to a near-complete failure of the memcached layer, resulting in an overload of requests to the main mysql farms.
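The failure mode follows directly from the usual cache-aside pattern; here's a minimal Python sketch, where `cache` and `db` are placeholders for a memcached client and the MySQL layer (names and methods are assumptions for illustration). The point is that if the cache tier disappears, every single read falls through to the database at once:

```python
def get_user(user_id, cache, db):
    """cache/db are stand-ins for a memcached client and a MySQL access layer."""
    key = f"user:{user_id}"
    user = cache.get(key)              # fast path: answered by memcached
    if user is None:                   # cache miss -- or the whole cache tier is down
        user = db.query_user(user_id)  # falls through to the MySQL farm
        cache.set(key, user)           # repopulate so the next read is cheap
    return user
```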
