
Comment Re:Encryption (Score 1) 220

What is the problem with the CRIME attack and header compression?

Just add an XOR mask string to the Cookie header that is applied against the other data fields. The XOR mask itself can change each time a Cookie header is emitted. Now you have a non-repeating, pseudo-random input for the compressor to work on, but the other party can apply the inverse transform to the Cookie header to recover the original data.

For good measure, also add a Random-Nonce-Field-1="random-length-data" which is simply ignored and discarded by the other end. Now you can perturb the compressor in both directions: Random-Nonce-Field-1 repeats the same data each time (useless to the attacker, but something the compressor can compress), while a Random-Nonce-Field-2 differs for each header, like the XOR mask, again carrying nothing but ignorable data.
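Something like this rough sketch in C; the XOR-Pad and Masked-Value field names and the hex encoding are my own invention, not from any cookie spec:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Sketch: mask the cookie value with a fresh random XOR pad on every
     * emission and send the pad alongside, so the receiver can invert the
     * transform. Hex encoding keeps the masked bytes header-safe. The wire
     * bytes never repeat, so the compressor gets no stable prefix to probe. */
    static void emit_masked_cookie(const char *value)
    {
        size_t n = strlen(value);
        unsigned char *pad = malloc(n);
        size_t i;

        if (pad == NULL)
            return;
        for (i = 0; i < n; i++)
            pad[i] = rand() & 0xff;      /* use a real CSPRNG in practice */

        printf("Cookie: XOR-Pad=");
        for (i = 0; i < n; i++)
            printf("%02x", pad[i]);
        printf("; Masked-Value=");
        for (i = 0; i < n; i++)
            printf("%02x", (unsigned)((unsigned char)value[i] ^ pad[i]));
        printf("\r\n");
        free(pad);
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        emit_masked_cookie("session=deadbeef");  /* different bytes each call */
        return 0;
    }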

Now it is up to the researchers to use these tools (added via a Cookie spec change) to come up with the most CPU-efficient way to utilize them and make CRIME and other such attacks non-viable.

Or maybe I am missing something glaring here?

Comment Re:Which is why sometimes small engines ... (Score 1) 238

Heh, except for the matter that if they don't comply with whatever regulations come out, they don't get approval.

While retrospective regulation of old cars is a political matter (a pissed-off public being forced to comply), the regulations affecting brand-new cars are not. Those are a matter for the auto industry to solve, and the public doesn't care.

Retrospective regulation changes (ones that affect a significant share of the population) are rare events.

Plus there is that small matter of driving on the correct side of the road (which we do). So this does influence which side of the car the driver sits on.

Comment Re:I considered doing the same myself (Score 1) 139

Huh...

Neither suggests access was explicitly or implicitly DENIED to third parties.
All someone else was doing was taking a photo of you.

Oh, you have a Terms & Conditions document in your back pocket, do you!

robots.txt is great and all, but someone did actually sit there pressing a button for each website hit; the button generated a random number, and that number was used to randomize the delay and the User-Agent data. It was under 2500 hits, after all. Sheesh, I can hit eBay that many times just by browsing their catalogue for an hour.

Comment Re: But is it even usable? (Score 1) 208

Hmm, but you should have your RAID system performing:

* verification (reading all copies and checking that each CRC validates correctly, as expected)

* scrubbing (writing random patterns to each block of the disk to confirm the disk is in good order and will take new data; it also re-energises the disk. The original data is then written back into the block and verified before the scrub moves on to another part of the disk. This operation often requires battery-backed memory, since that is how the original data is preserved robustly across an unwanted power outage.)

Ideally you should verify (the whole storage) at least once per week, and scrub (the whole storage) once per month. With hardware cards these operations can be performed slowly in the background, but often a few hours a day during off-peak will do the job.
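A rough sketch of what one scrub pass does per block, as user-space C against a plain file descriptor; a real controller does this in firmware and stages the original data in battery-backed RAM, which this sketch cannot reproduce, and the fd would need O_DIRECT plus aligned buffers so the verify reads actually hit the medium:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define BLOCK 4096

    /* Scrub one block: stage the original data, write a random test
     * pattern, verify it reads back, then restore and re-verify the
     * original. Returns 0 on success, -1 if the block failed a step. */
    static int scrub_block(int fd, off_t offset)
    {
        unsigned char orig[BLOCK], pattern[BLOCK], check[BLOCK];
        size_t i;

        if (pread(fd, orig, BLOCK, offset) != BLOCK)
            return -1;                   /* cannot stage original: skip */

        for (i = 0; i < BLOCK; i++)
            pattern[i] = rand() & 0xff;  /* random test pattern */

        if (pwrite(fd, pattern, BLOCK, offset) != BLOCK)
            return -1;
        if (pread(fd, check, BLOCK, offset) != BLOCK ||
            memcmp(pattern, check, BLOCK) != 0)
            return -1;                   /* block will not hold new data */

        if (pwrite(fd, orig, BLOCK, offset) != BLOCK)
            return -1;                   /* restore the original contents */
        return (pread(fd, check, BLOCK, offset) == BLOCK &&
                memcmp(orig, check, BLOCK) == 0) ? 0 : -1;
    }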

Doing this alone can extend the life of disks, compared with writing some block of data, not accessing it for 5 years, and then wondering why, 5 years on, the block is now corrupted.

Both these operations provide a better health check of RAID than SMART alone, since SMART only knows of a problem after it has seen one, and that often requires you to access the problem area of the disk. That access is exactly what verification/scrubbing does on your behalf, continuously, over the week.

Comment Re:I wonder... (Score 1) 102

Yes, boiling at this temperature is useful. It makes it easier to separate the hot from the cold: the equipment can be immersed in the liquid form, and as it heats up, the part that needs cooling and re-condensing separates itself automatically.

Transporting the hot part becomes easy: the system has a natural pump, driven by the heat, cycling the atoms around. So all that heat energy is usefully absorbed by the system (as kinetic energy), and you are not putting additional energy in (such as with a liquid pump), which would itself also need cooling.

Comment Re:code review idea (Score 1) 447

Would love to help maintain it, but committers are too busy.

First they need to switch to git as the main tree (maybe they have already done this since I last looked properly).
Second they need to set up gerrit code review and allow anyone and everyone to submit and review patches.
Third they need to set up some kind of unit testing and code coverage framework. (I once wrote a testing tool, sslregress, to validate a change I made to fix a long-standing API oversight; 'sslregress' provides a framework for stress-testing the network interaction between two SSL endpoints in ways you cannot ordinarily test, and it could easily be extended to send garbage data inside valid SSL/TLS records.) But can someone explain how a tool like that actually makes it into the code base? Who do I need to f**k?

From my point of view the OpenSSL maintainers are in their ivory tower, and that is the way they like it. Maybe it helps keep their revenue streams up, since those committers are also part of the official support teams?

Comment Re:Not malicious but not honest? (Score 1) 447

It might contain a length due to cipher block padding? Is the SSL/TLS record length guaranteed to have 1-byte granularity for all supported block cipher modes and methods?

It might contain the length to allow the protocol to be extended at a later date, by putting additional data after the heartbeat echo payload. Because version 1 of the feature included the length, data that may exist after the payload can be specified in version 2 of the feature, while version 1 systems can still interoperate as if everything were version 1.

My question is... why is the correct action to silently discard the record? Surely a malformed heartbeat record should result in a TLS protocol error response, with no further inbound or outbound data processing (except to flush the error alert record to the other end) and a closed connection?

Comment Re:Not malicious but not honest? (Score 1) 447

Huh, no... the developer who put in the TLS Heartbeat support tested it by sending valid and well-formed data.

To expose this bug you have to send a validly authenticated SSL3 record, but intentionally modify the length attribute inside it that conveys the length of the heartbeat payload data, so that it overruns the data actually left inside the SSL3 record. The failure was not performing that bounds check: is the claimed heartbeat payload length longer than the remainder of the data available inside the SSL3 record? I presume a TLS heartbeat is not allowed to cross SSL3 records, and that the limit of an SSL3 record is 64KB as mandated by the TLS protocol.
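In C it comes down to one missing comparison, something like this sketch; the struct layout and names are my illustration, not OpenSSL's actual code:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative heartbeat record: a claimed payload length plus the
     * bytes actually present in the enclosing SSL3 record. */
    struct hb_record {
        uint16_t payload_len;        /* attacker-controlled claimed length */
        size_t   len;                /* bytes actually left in the record */
        const unsigned char *data;   /* start of the payload bytes */
    };

    /* Echo the heartbeat payload back. The vulnerable code trusted
     * payload_len and copied that many bytes, over-reading the heap;
     * the fix is the bounds check against the data actually present. */
    static int hb_echo(const struct hb_record *r,
                       unsigned char *out, size_t out_cap)
    {
        if (r->payload_len > r->len)
            return -1;               /* malformed: discard, echo nothing */
        if (r->payload_len > out_cap)
            return -1;

        memcpy(out, r->data, r->payload_len);   /* now cannot over-read */
        return (int)r->payload_len;
    }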

So no, OpenSSL doesn't crash, because when tested against itself the data was always well-formed and valid.

Comment Re:Does the "fix" include scrubbing? (Score 1) 149

Ignoring the performance hit of this (one that many applications won't accept)...

Kernel pages are often allocated in 4KB chunks, so maybe you have covered up to the end of the current 4KB page (the part managed by OpenSSL's custom allocator).

But the system (libc) allocator does not overwrite released blocks, and the next 60+ KB would be managed by it, so not all released blocks get scrubbed.
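The point is easy to demonstrate; a sketch, and strictly allocator-dependent behaviour, since nothing in the C standard forces a freed block to be cleared, which is exactly the problem:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Freed heap blocks are recycled, not scrubbed: a later allocation of
     * a similar size will often hand back the same block with the old
     * contents intact. Whether the secret actually prints depends on the
     * allocator, which is the whole point: nothing guarantees a wipe. */
    int main(void)
    {
        char *secret = malloc(64);
        if (secret == NULL)
            return 1;
        strcpy(secret, "session-key=deadbeef");
        free(secret);                 /* contents are NOT wiped here */

        char *reuse = malloc(64);     /* frequently the same block back */
        if (reuse == NULL)
            return 1;
        printf("recycled block holds: %.20s\n", reuse);

        free(reuse);
        return 0;
    }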

Comment Re:It's time we own up to this one (Score 1) 149

You can augment custom malloc implementations to notify your memory testing tool of choice.

What memory testing tool does not have a mechanism for this? Someone just needs to set up a C-language macro at the exit of the custom malloc() and at the entry of the custom free() that passes along the arguments. Then your tool of choice can do its work at that point.
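Valgrind's memcheck already ships client-request macros for exactly this; a sketch of hooking a hypothetical custom allocator (my_malloc/my_free are placeholder names, not OpenSSL's):

    #include <stdlib.h>
    #include <valgrind/memcheck.h>

    /* Hook points for a hypothetical custom pool allocator so memcheck
     * tracks its blocks like ordinary malloc/free. The client-request
     * macros compile to cheap no-ops when not running under Valgrind. */
    static void *my_malloc(size_t n)
    {
        void *p = malloc(n);   /* stand-in for the real pool logic */
        /* report a malloc-like block of n bytes at p
         * (0 redzone bytes, contents not zeroed) */
        VALGRIND_MALLOCLIKE_BLOCK(p, n, 0, 0);
        return p;
    }

    static void my_free(void *p)
    {
        /* report the block gone, so any later touch is flagged */
        VALGRIND_FREELIKE_BLOCK(p, 0);
        free(p);
    }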

Comment Re:It's time we own up to this one (Score 1) 149

No, you are confusing SSL the protocol with the way it is used in browsers for HTTPS.

This (the way the browser makes use of SSL) can always be changed without affecting SSL and without using anything other than X.509 certificates. Just change the validation mechanism for the X.509 certificates; you don't have to rip up what already exists to make it better.

Comment Re:It's time we own up to this one (Score 1) 149

Use git as the primary repository for OpenSSL and then implement a system where all incoming patches land via gerrit after public review.
Now anyone and everyone can review every patch. To those who complain there are not enough helpers on the project: this is because you did not update the process to let the help arrive; things are still pushed through a mailing list and the RT tracker (some obsolete ticket system).

Set up a points system under gerrit that requires at least 2 official committers and 2 independent parties on the internet to review every patch.

Make it a rule that every new feature must have a unit test associated with it to exercise it, or at least have the built-in applications updated to enable/disable/utilise it.

Fix that archaic PhD-boffin coding convention the project uses; take a look at the Linux kernel source and use its style as a starting point. The OpenSSL coding style does not lean towards conventions that reduce mistakes.
Relegate all the style gymnastics that exist to let a compiler over 10 years old build the project; move them into a separate compatibility-layer project.
Take a look at whether LLVM can instrument the C code at compile time to provide code coverage, then write tests that exercise all the existing applications, then work on new applications that can exercise the major areas not yet touched.

Comment Re:Eagerly awaiting ickle benchmarks (Score 1) 46

And the great thing about this project is that if a graphics hardware vendor needs to choose what to spend 1000 hours working on, it can now be the 3D driver instead of the 2D driver, because someone else created a layer that uses the 3D driver for all 2D operations.

Previously the money/time was spent on 2D to serve the largest target audience for the hardware, and unfortunately that 2D effort could not be utilized for the Linux 3D use case, so no wonder improvements there were hard to come by.

So expect those crappy 3D drivers to get good in the areas needed by the 2D-over-3D code real soon, and while they are in there they might as well pick off all the low-hanging fruit around 3D usage.

So don't apply your previous knowledge of Linux 3D driver support to what the future will now bring to both the 2D and 3D scene.

Comment Re:Depends on the threat model, doesn't it? (Score 1) 279

Wooossshhhh!

He (somenickname) is talking about the global CA system where all 1000 CAs are equally trusted, so the NSA only needs to convince one of them to reissue a certificate (based on a private key the NSA provided) in the name of the target website they wish to intercept.

The content consumer has no way of knowing whether the SSL cert being used for the HTTPS connection is the one using the site owner's private key or the one using the NSA's private key. This is why simply getting a green light because you switched to SSL is security theatre.

But you (kasperd) go on a rant about other matters.

Projects such as the SSL Observatory https://www.eff.org/observator... and Convergence http://convergence.io/index.ht... and http://tech.slashdot.org/story... can help here, combined with DNSSEC (which has somewhat the same problems as the CA system, but is useful for deploying on low-security websites without paying the sign-my-certificate tax).

Comment Re:I'm sorry I'm an idiot (Score 1) 204

but I don't really understand X11 either

The problem is that neither do the maintainers fully understand every corner of every feature, driver and extension in the major X11 implementations that exist (XFree86/X.Org).

So now we get Linux graphics drivers that target what is really needed (not the performance metric of bitblt-ing specific patterns at a specific bit depth to the screen), and we decouple the hardware driver from the display acquisition arbitrator (the software that decides which application gets to draw on and utilize the hardware at any given time).

Most people are in exactly your position; the best thing to do is let those with knowledge and enthusiasm knock themselves out trying to produce a better solution to the problem. This is just how progress happens. As for me, I stopped programming against the X11 API calls a long time ago; I use a toolkit like Qt, and that will not change. So I look forward to an improved graphical experience on all form factors of Linux (mobile/pad/notebook/desktop) for the next 30 years from this.

I currently find Windows a better experience for using an IDE; modern X11 has too much input lag, and copy-and-paste is also laggy and unreliable, which really saps productivity.
