Comment Re:Technological solution (Score 2) 382

Heh, because the stock they sell immediately is one of the other 999,999 holdings they have in the same entity. Those holdings already had their 1-minute minimum holding period expire a long time ago.

This is also why it is funny when people say that pension funds hold their stock with a long-term view.

But what can happen is that two pension funds collude to exchange assets with each other (a zero-sum game) over a period of time, so the fees levied on the transfers can be taken by all the snouts in the transaction-cost trough. Yes, if you stand back and look at it week to week they appear to be holding their positions for the long game, but actually they have found a way to extract additional profits to pay the pension pot "fund managers" their bonuses.

Comment Re:Encryption (Score 1) 220

Because the point of the compression is to transparently compress the content body payload (and potentially the HTTP header names and keywords) at the TLS streaming level.

It only makes compression useless for the "Cookie" header, which is exactly what is needed to defeat CRIME.
All security-sensitive data like this should be trivially fuzzable. Maybe a better scheme would be to implement something like:

Fuzz-XOR-Key: 0123456789abcdefxyz/+===
Fuzz-Cookie: $version=1; $foobar="123"; $random-nonce-1="192jsk232"; SESSIONID="0123456789secretXOREDresultHER"; $random-nonce-99="982kmn323"; $fuzzed="SESSIONID";

NOTE: It's been a while since I looked at a Cookie header directly; there are probably some major syntax mistakes in the above example.

Now you can extend it to any other kind of header using a common key and transformation technique, by prefixing headers with Fuzz-* and writing an RFC/IETF document specifying how and when the key is applied to which parts of the header value data.
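
To make that concrete, here is a rough sketch in C of the masking step (my own illustration, not any existing standard; the header names, the hex encoding, and the use of rand() in place of a proper CSPRNG are all assumptions):

/*
 * Hypothetical sketch of the "Fuzz-XOR-Key" idea: mask the secret cookie
 * value with a fresh random key per request, so the byte sequence fed to
 * the TLS-level compressor never repeats.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void emit_fuzzed_cookie(const unsigned char *secret, size_t len)
{
    unsigned char *key = malloc(len);
    unsigned char *masked = malloc(len);
    size_t i;

    if (key == NULL || masked == NULL) {
        free(key);
        free(masked);
        return;
    }

    /* rand() is only for illustration; a real scheme needs a
     * cryptographically secure RNG. */
    for (i = 0; i < len; i++) {
        key[i] = (unsigned char)(rand() & 0xff);
        masked[i] = secret[i] ^ key[i];
    }

    printf("Fuzz-XOR-Key: ");
    for (i = 0; i < len; i++)
        printf("%02x", key[i]);
    printf("\r\nFuzz-Cookie: SESSIONID=");
    for (i = 0; i < len; i++)
        printf("%02x", masked[i]);
    printf("\r\n");

    free(key);
    free(masked);
}

int main(void)
{
    const unsigned char secret[] = "0123456789secret";

    srand((unsigned)time(NULL));
    /* Same secret twice, but the wire bytes differ on every request, so an
     * attacker-injected guess can no longer compress against them. */
    emit_fuzzed_cookie(secret, sizeof(secret) - 1);
    emit_fuzzed_cookie(secret, sizeof(secret) - 1);
    return 0;
}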

Your suggestion of disabling compression in SSL/TLS support is already implemented.

Comment Re:Annoying. (Score 5, Informative) 347

Very similar to how it works in the UK.

A business called "BT Wholesale", aka OpenReach, operates as a corporate entity in its own right, which the government regulates. They more or less have a last-mile monopoly over the old British Telecom network (British Telecom being the incumbent single telephone operator, which was originally a public entity). It was made private maybe 20 years ago, but with certain caveats.

Such as a uniform pricing policy for all other telecom operators wishing to buy their wholesale services. Think FRAND, as opposed to scheming and back-office deals to maintain pricing.

Such as not offering the full package, i.e. only offering wholesale services. A regular home or business consumer never buys anything directly from the wholesale division. The end customer buys from one of the many (more than 500 on our little island) brand names, who in turn pay the wholesale rental fees out of your subscription.

Such as allowing politicians to have influence (through regulation) over certain aspects of governance. This is a good thing when there is a last-mile monopoly: there is at least some kind of elected accountability, especially since the government paid for the original construction of the network.

There is of course a parallel cable network now, which has its own independent last mile. So in almost all urban/suburban locations another option exists, but BT's copper POTS network has much higher coverage.

There are also some areas (such as Kingston upon Hull) which ended up with their own last-mile services and operate their own telecoms independently.

Here in the UK now (with BT Wholesale) the whole country is getting more street-side cabinets (to within 100 meters of every urban and suburban location), with fibre optics installed from those cabinets back to the local exchange site. The last 100 meters is still largely delivered over copper, but at speeds around 80Mbit/20Mbit, and I'm sure further speed increases will take place in the future, as happened with ADSL/ADSL2/ADSL2+. This national roll-out is over halfway through and I'm sure within the next 3 years the original plan will be complete.

There are still issues with many rural locations being stuck on dial-up-quality connections; hopefully, as cellular-like technology improves, it could be utilized as backhaul for rural locations. Rural in the UK might mean just being 8 miles out of town.

Comment Re:Encryption (Score 1) 220

What is the problem with the CRIME attack and header compression?

Just add an XOR key string to the Cookie header that is applied against the other data fields. The XOR key itself can change each time a Cookie header is emitted. Now you have a non-repeating, pseudo-random input for the compressor to work on, but the other party can apply the inverse transform to the Cookie header to get back the original data.

For good measure, also add an additional Random-Nonce-Field-1="random-length-data" which is simply ignored and discarded by the other end. Now you can perturb the compressor in both directions: Random-Nonce-Field-1 can repeat the same data each time (useless to the attacker, and something the compressor is allowed to compress), while a Random-Nonce-Field-2 might be different for each header, like the XOR key: again completely useless, to-be-ignored data.

Now it is up to the researchers to use these tools (added via a Cookie spec change) to come up with the most CPU-cost-efficient way to utilize them, making CRIME and other such attacks non-viable.
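
For anyone wondering why the non-repeating input matters, here is a toy sketch (my own, using zlib directly rather than TLS; build with -lz) of the length side channel that CRIME exploits. With a fresh XOR mask applied per request, the attacker's guess can no longer line up with the secret bytes and this size difference disappears:

/*
 * Demonstration of the compression length side channel: when an
 * attacker-controlled guess shares bytes with a secret in the same
 * compressed stream, the output gets shorter.
 */
#include <stdio.h>
#include <zlib.h>

static unsigned long compressed_len(const unsigned char *buf, size_t len)
{
    unsigned char out[1024];
    uLongf out_len = sizeof(out);

    /* Error handling omitted for brevity; compress() returns Z_OK here. */
    compress(out, &out_len, buf, (uLong)len);
    return (unsigned long)out_len;
}

int main(void)
{
    unsigned char buf[256];
    const char *secret = "SESSIONID=s3cr3tvalue";
    int n;

    /* Attacker-controlled guess that matches the secret: the compressor
     * finds the repeat, so the output is shorter. */
    n = snprintf((char *)buf, sizeof(buf),
                 "GET /?q=SESSIONID=s3cr3tval HTTP/1.1\r\nCookie: %s\r\n",
                 secret);
    printf("matching guess: %lu compressed bytes\n",
           compressed_len(buf, (size_t)n));

    /* A guess that shares nothing with the secret compresses worse. */
    n = snprintf((char *)buf, sizeof(buf),
                 "GET /?q=Zq8Xw3Lr7Tn2Vk9Jd5B HTTP/1.1\r\nCookie: %s\r\n",
                 secret);
    printf("wrong guess   : %lu compressed bytes\n",
           compressed_len(buf, (size_t)n));

    return 0;
}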

Or maybe I am missing something glaring here?

Comment Re:Which is why sometimes small engines ... (Score 1) 238

Heh, except for the small matter that if they don't comply with whatever regulations come out, they don't get approval.

While retrospective regulation of old cars is a political matter (a pissed-off public forced to comply), regulations affecting brand-new cars are not. That is a matter for the auto industry to solve, and the public don't care.

Retrospective regulation changes (that affect a significant portion of the population) are rare events.

Plus there is that small matter of driving on the correct side of the road (which we do). So this does influence which hand-drive configuration the car has.

Comment Re:I considered doing the same myself (Score 1) 139

Huh...

Neither suggests access was explicitly or implicitly DENIED to third parties.
All someone else was doing was taking a photo of you.

Oh, you have a Terms & Conditions document in your back pocket, do you!

robots.txt is great and all, but someone did actually sit there pressing a button for each website hit; the button generated a random number, and this number was used to randomize the delay and the User-Agent data. It was under 2500 hits after all; sheesh, I can hit eBay that many times just by browsing their catalogue for an hour.
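
For what it's worth, the mechanism being described is nothing exotic; a rough sketch with libcurl (the URL, User-Agent strings, delays and request count are placeholders of mine) would look something like:

/*
 * One request per "button press", with a random delay and a User-Agent
 * string picked at random each time.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <curl/curl.h>

int main(void)
{
    const char *agents[] = {
        "Mozilla/5.0 (Windows NT 6.1) ExampleBrowser/1.0",
        "Mozilla/5.0 (X11; Linux x86_64) ExampleBrowser/2.0",
    };
    CURL *curl;
    int i;

    srand((unsigned)time(NULL));
    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (curl == NULL)
        return 1;

    for (i = 0; i < 3; i++) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/page");
        curl_easy_setopt(curl, CURLOPT_USERAGENT, agents[rand() % 2]);
        curl_easy_perform(curl);

        /* Random 1-10 second pause between hits. */
        sleep((unsigned)(1 + rand() % 10));
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}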

Comment Re: But is it even usable? (Score 1) 208

Hmm, but you should be having your RAID system perform:

* verification (reading all copies and checking that the CRC validates correctly, as expected)

* scrubbing (writing some other random patterns to each block of the disk to confirm the disk is in good order and will take new data; it also re-energises the disk. The original data is then written back into the block and verified before moving on to another part of the disk. This operation often requires battery-backed memory, since that is how the original data is preserved robustly across an unwanted power outage.)

Ideally you should verify (the whole storage) at least once per week, and scrub (the whole storage) once per month. With hardware cards these operations can be performed slowly in the background, but often a few hours a day during off-peak will do the job.

Doing this alone can extend the life of disks, compared to writing some block of data, not accessing it for 5 years, and then wondering why, 5 years later, the block is now corrupted.

Both these operations provide a better health check of RAID than SMART alone, since SMART only knows of a problem after it has seen one, and that often requires you to access the problem area of the disk. This is what verification/scrubbing does on your behalf continuously over the week.
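
The above is about hardware RAID cards, but as a concrete software example of the same idea, Linux md RAID exposes a sysfs knob that kicks off exactly this kind of background verification pass (the device name md0 below is an assumption; many distributions ship a monthly cron job that does this for you):

/*
 * Start a background verification ("check") pass on a Linux md array.
 */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/block/md0/md/sync_action";
    FILE *f = fopen(path, "w");

    if (f == NULL) {
        perror(path);
        return 1;
    }

    /* "check" reads every stripe and verifies parity/mirrors; "repair"
     * additionally rewrites inconsistent data.  Progress shows up in
     * /proc/mdstat. */
    fputs("check\n", f);
    fclose(f);
    return 0;
}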

Comment Re:I wonder... (Score 1) 102

Yes, boiling at this temperature is useful. It makes it easier to separate the hot from the cold: the equipment can be immersed in the liquid form, and as it heats up, the part that needs cooling and re-condensing automatically separates itself.

Transporting the hot part becomes easy: the system has a natural pump cycling the atoms around, driven by the heat itself. So all that heat energy is more usefully absorbed by the system (as kinetic energy), and you are not putting additional energy in (such as with a liquid pump), which would itself also require cooling.

Comment Re:code review idea (Score 1) 447

Would love to help maintain it, but committers are too busy.

First, they need to switch to git as the main tree (maybe they have already done this since I last looked properly).
Second, they need to set up Gerrit code review and allow anyone and everyone to submit and review patches.
Third, they need to set up some kind of unit testing and code coverage framework. (I once wrote a testing tool, sslregress, to validate a change I made to fix a long-standing API oversight; it provides a framework for stress-testing the network interaction between two SSL endpoints in ways you cannot ordinarily test, and it could easily be extended to send garbage data inside valid SSL/TLS records.) But can someone explain how any of this would actually make it into the code base? Who do I need to f**k?

From my point of view the OpenSSL maintainers are in their ivory tower, and that is the way they like it. Maybe it helps keep their revenue streams up, since those committers are also part of the official support teams?

Comment Re:Not malicious but not honest? (Score 1) 447

It might contain a length because of cipher block padding? Is the SSL/TLS record length guaranteed to have 1-byte granularity for all supported block cipher modes and methods?

It might contain the length to allow the protocol to be extended at a later date by putting additional data after the heartbeat echo payload. Because version 1 of the feature included the length, any data that exists afterwards can be specified in version 2 of the feature, while version 1 systems can still interact as if it were version 1.

My question is... why is the correct action to silently discard the record? Surely a malformed heartbeat record should result in a TLS protocol error response, with no further inbound or outbound data processed (except to flush the error alert record to the other end) and a closed connection?

Comment Re:Not malicious but not honest? (Score 1) 447

Huh, no... the developer who put in the TLS Heartbeat support tested it by sending valid and well-formed data.

To expose this bug you have to send a validly authenticated SSL3 record, but intentionally modify a length attribute inside it that is designed to convey the length of the heartbeat payload data. This length might overrun the amount of data actually left inside the SSL3 record. The failure was not performing that bounds check: is the heartbeat payload length longer than the remainder of the data available inside the SSL3 record? I presume a TLS heartbeat is not allowed to cross SSL3 records, and the limit of an SSL3 record is 64Kb as mandated by the TLS protocol.
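
A simplified sketch (my own, not the actual OpenSSL code) of the missing bounds check, assuming the record has already been decrypted and authenticated:

/*
 * 'rec' stands for one decrypted record; the heartbeat body is:
 * 1 byte type, 2 bytes payload length, payload, >= 16 bytes padding.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct record {
    const unsigned char *data;  /* decrypted record payload */
    size_t length;              /* bytes actually received  */
};

/* Returns a malloc'd echo response of 'payload' bytes, or NULL if the
 * message is malformed and must be dropped. */
static unsigned char *handle_heartbeat(const struct record *rec)
{
    if (rec->length < 1 + 2 + 16)
        return NULL;                     /* too short to even parse */

    uint16_t payload = (uint16_t)((rec->data[1] << 8) | rec->data[2]);

    /* THE missing check: the claimed payload length must fit inside the
     * record, otherwise the memcpy below reads past the buffer
     * (Heartbleed). */
    if ((size_t)1 + 2 + payload + 16 > rec->length)
        return NULL;

    unsigned char *echo = malloc(payload);
    if (echo != NULL)
        memcpy(echo, rec->data + 3, payload);
    return echo;
}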

So no, OpenSSL doesn't crash, because when tested against itself the data was always well-formed and valid.

Comment Re:Does the "fix" include scrubbing? (Score 1) 149

Ignoring the performance hit of this (which many applications won't accept)...

Often kernel pages are allocated in 4Kb chunks, which means maybe you have scrubbed up to the end of the current 4Kb page (the one managed by OpenSSL's custom allocator).

But the system (libc) allocator is not overwriting released blocks, and the next 60+ Kb should be managed by it, so those released blocks are not being overwritten.
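
For reference, the kind of scrubbing being asked about is just wiping a block before it goes back to the allocator; a minimal sketch (my own illustration, not the actual fix) might look like:

/*
 * Wipe a buffer before handing it back to the allocator, so stale secrets
 * cannot be re-read through a later over-read.  The caller must know the
 * block size.
 */
#include <stdlib.h>

static void secure_free(void *ptr, size_t len)
{
    if (ptr == NULL)
        return;

    /* A plain memset() before free() can be optimised away by the
     * compiler; real code should use something like OPENSSL_cleanse(),
     * explicit_bzero() or memset_s(), which exist for exactly this
     * reason. */
    volatile unsigned char *p = ptr;
    while (len--)
        *p++ = 0;

    free(ptr);
}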

Comment Re:It's time we own up to this one (Score 1) 149

You can augment custom malloc implementations to notify your memory testing tool of choice.

What memory testing tool does not have a mechanism to do this? Someone just needs to set up a C language macro at the exit of the custom malloc() and at the entry of the custom free() that passes along the arguments. Then your tool of choice can do its thing at that point.
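
Valgrind, for one, provides exactly such client-request macros; a rough sketch of wiring them into a custom allocator (my_pool_alloc/my_pool_free are placeholder names, and the toy bump allocator is just a stand-in for the real pool) could look like:

#include <stddef.h>
#include <valgrind/valgrind.h>

/* A toy bump allocator standing in for the custom allocator. */
static char pool[64 * 1024];
static size_t pool_used;

void *my_pool_alloc(size_t size)
{
    if (pool_used + size > sizeof(pool))
        return NULL;

    void *p = pool + pool_used;
    pool_used += size;

    /* Tell the tool (Valgrind here) that a malloc-like block now exists,
     * so overruns and use-after-free of it can be reported. */
    VALGRIND_MALLOCLIKE_BLOCK(p, size, /*rzB=*/0, /*is_zeroed=*/0);
    return p;
}

void my_pool_free(void *p, size_t size)
{
    (void)size;  /* a real pool would recycle the space */
    /* Tell the tool the block is dead; later accesses get flagged. */
    VALGRIND_FREELIKE_BLOCK(p, /*rzB=*/0);
}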

Comment Re:It's time we own up to this one (Score 1) 149

No, you confuse SSL the protocol with the way it is used in browsers for HTTPS.

This (the way the browser makes use of SSL) can always be changed without affecting SSL and without using something other than X.509 certificates. Just change the validation mechanism for the X.509 certificates; you don't have to rip up what already exists to make it better.
