Comment Re:I wonder... (Score 1) 102

Yes, boiling at this temperature is useful. It makes it easier to separate the hot from the cold: the equipment can be immersed in the liquid phase, and as it heats up, the coolant that needs re-condensing automatically separates itself from the part doing the cooling.

Transporting the hot part also becomes easy: the system has a natural pump, driven by the heat itself, cycling the molecules around. All that heat energy is usefully absorbed by the system (as kinetic energy), and you are not putting additional energy in (such as a mechanical pump) that would itself require cooling.

Comment Re:code review idea (Score 1) 447

I would love to help maintain it, but the committers are too busy.

First, they need to switch to git as the main tree (maybe they have already done this since I last looked properly).
Second, they need to set up Gerrit code review and allow anyone and everyone to submit and review patches.
Third, they need to set up some kind of unit-testing and code-coverage framework. I once wrote a testing tool, sslregress, to validate a change I made fixing a long-standing API oversight; it provides a framework for stress-testing the network interaction between two SSL endpoints in ways you cannot ordinarily test, and it could easily be extended to send garbage data inside valid SSL/TLS records (a sketch of the idea follows below). But can someone explain how a tool like that actually makes it into the code base? Who do I need to f**k?
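For the curious, here is a minimal sketch of that harness idea: two in-process SSL endpoints joined by memory BIOs, with the test driver shuttling bytes between them so records can be inspected or corrupted in flight. Context/cert setup is omitted and error handling is minimal; this is illustrative, not sslregress itself.

    #include <openssl/ssl.h>
    #include <openssl/bio.h>

    /* Drain everything 'from' wants to send and feed it to 'to'.
     * A stress test would mutate buf here, inside valid record bounds. */
    static void pump(SSL *from, SSL *to)
    {
        unsigned char buf[4096];
        int n;
        while ((n = BIO_read(SSL_get_wbio(from), buf, sizeof(buf))) > 0)
            BIO_write(SSL_get_rbio(to), buf, n);
    }

    void run_handshake(SSL_CTX *client_ctx, SSL_CTX *server_ctx)
    {
        SSL *client = SSL_new(client_ctx);
        SSL *server = SSL_new(server_ctx);

        /* Each endpoint gets its own read/write memory BIOs. */
        SSL_set_bio(client, BIO_new(BIO_s_mem()), BIO_new(BIO_s_mem()));
        SSL_set_bio(server, BIO_new(BIO_s_mem()), BIO_new(BIO_s_mem()));
        SSL_set_connect_state(client);
        SSL_set_accept_state(server);

        /* Alternate handshake steps, bounded so a failed handshake
         * cannot spin forever. */
        for (int i = 0; i < 10 && (!SSL_is_init_finished(client) ||
                                   !SSL_is_init_finished(server)); i++) {
            SSL_do_handshake(client);
            pump(client, server);
            SSL_do_handshake(server);
            pump(server, client);
        }
    }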

From my point of view, the OpenSSL maintainers are in their ivory tower, and that is the way they like it. Maybe it helps keep their revenue streams up, since those committers are also part of the official support teams?

Comment Re:Not malicious but not honest? (Score 1) 447

It might contain a length because of cipher block padding. Is the SSL/TLS record length guaranteed to have 1-byte granularity for all supported block cipher modes and methods?

It might contain the length to allow the protocol to be extended at a later date, by putting additional data after the heartbeat echo payload. Because version 1 of the feature included the length, version 2 could define extra data after the payload while version 1 systems continue to interoperate as if nothing had changed.
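As it happens, the wire format already reserves space after the echoed payload: RFC 6520 puts at least 16 bytes of random padding there, which the receiver must ignore. A rough C-style rendering of the layout (the RFC itself uses TLS presentation language, so the variable-length fields appear as comments here):

    #include <stdint.h>

    /* TLS heartbeat message layout per RFC 6520 (sketch) */
    struct heartbeat_message {
        uint8_t  type;            /* heartbeat_request(1) or heartbeat_response(2) */
        uint16_t payload_length;  /* big-endian on the wire */
        /* opaque payload[payload_length];  echoed back verbatim            */
        /* opaque padding[>= 16];           random; receiver MUST ignore it */
    };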

My question is... why is the correct action to silently discard the record? Surely a malformed heartbeat record should result in a TLS protocol error response, with no further inbound or outbound data processed (except to flush the error alert record to the other end) and the connection closed?

Comment Re:Not malicious but not honest? (Score 1) 447

Huh, no... the developer who put in the TLS heartbeat support tested it by sending valid, well-formed data.

To expose this bug you have to send a validly authenticated SSL3 record, but intentionally modify the length attribute inside it that conveys the length of the heartbeat payload, so that it overruns the data actually remaining in the SSL3 record. The failure was not performing that bounds check: is the claimed heartbeat payload length longer than the remainder of the data left in the SSL3 record? I presume a TLS heartbeat is not allowed to cross SSL3 records, and that the limit of an SSL3 record is 64KB as mandated by the TLS protocol.

So no, OpenSSL doesn't crash when tested against itself, because the data was always well formed and valid.
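For reference, the missing check was tiny. A sketch of the bounds test the eventual fix performs, modelled on OpenSSL's tls1_process_heartbeat(); here p points at the heartbeat message inside the record, s->s3->rrec.length is the record's true length, and n2s() is OpenSSL's read-16-bit-big-endian macro:

    unsigned short hbtype;
    unsigned int payload;

    if (1 + 2 + 16 > s->s3->rrec.length)
        return 0;        /* too short for type + length + minimum padding */
    hbtype = *p++;
    n2s(p, payload);     /* attacker-controlled claimed payload length */
    if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0;        /* claimed payload overruns the record: discard */
    /* only now is it safe to echo 'payload' bytes back to the peer */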

Comment Re:Does the "fix" include scrubbing? (Score 1) 149

Ignoring the performance hit from this (which many applications won't accept):

Kernel pages are typically allocated in 4KB chunks, so at best the scrubbing covers up to the end of the current 4KB page (the part managed by OpenSSL's custom allocator).

But the system (libc) allocator does not overwrite released blocks, and the next 60+ KB of an over-read may well be libc-managed memory, which is never scrubbed.
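For what scrubbing on release looks like, a minimal hypothetical sketch (not OpenSSL code); the volatile walk stops the compiler optimising the wipe away as a dead store:

    #include <stdlib.h>

    /* Hypothetical scrubbing wrapper: wipe a block before handing it back,
     * so stale secrets cannot be recovered through a later over-read. */
    static void scrub_free(void *ptr, size_t len)
    {
        if (ptr == NULL)
            return;
        volatile unsigned char *p = ptr;  /* volatile defeats dead-store elision */
        while (len--)
            *p++ = 0;
        free(ptr);
    }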

Comment Re:It's time we own up to this one (Score 1) 149

You can augment custom malloc implementations to notify your memory testing tool of choice.

What memory testing tool does not have a mechanism for this? Someone just needs to put a C macro at the exit of the custom malloc() and at the entry of the custom free(), passing along the pointer and size. Your tool of choice can then act at those points.
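Valgrind is one tool that ships exactly these hooks as client-request macros; a minimal sketch, where my_alloc_impl()/my_free_impl() stand in for the real custom allocator:

    #include <stddef.h>
    #include <valgrind/valgrind.h>  /* macros compile to no-ops outside Valgrind */

    extern void *my_alloc_impl(size_t n);  /* the real custom allocator */
    extern void  my_free_impl(void *p);

    void *custom_malloc(size_t n)
    {
        void *p = my_alloc_impl(n);
        if (p != NULL)  /* tell Valgrind to track this block like a malloc */
            VALGRIND_MALLOCLIKE_BLOCK(p, n, /*redzone*/0, /*is_zeroed*/0);
        return p;
    }

    void custom_free(void *p)
    {
        VALGRIND_FREELIKE_BLOCK(p, /*redzone*/0);  /* mark freed before release */
        my_free_impl(p);
    }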

Comment Re:It's time we own up to this one (Score 1) 149

No, you're confusing SSL the protocol with the way it is used in browsers for HTTPS.

This (the way the browser makes use of SSL) can always be changed without affecting SSL and without abandoning X.509 certificates. Just change the validation mechanism for X.509; you don't have to rip up what already exists to make it better.

Comment Re:It's time we own up to this one (Score 1) 149

Use git as the primary repository for OpenSSL, and then require all incoming patches to land via Gerrit after public review.
Then anyone and everyone can review every patch. To those who complain there are not enough helpers on the project: this is what happens when you don't update the process to let the help arrive, and everything is pushed through a mailing list and RT (some obsolete ticket system).

Set up a points system under Gerrit that requires at least two official committers and two independent parties on the internet to review every patch.

Make it a rule that every new feature must have a unit test that exercises it, or at least have the built-in applications updated to enable/disable/utilise it.

Fix the archaic PhD-boffin coding convention the project uses: take a look at the Linux kernel source and use its style as a starting point. The OpenSSL coding style does not lean towards conventions that reduce mistakes.
Relegate all the style gymnastics that exist so a compiler more than ten years old can still build the project; move that into a separate compatibility-layer project.
Take a look at whether LLVM can instrument the C code at compile time to provide code coverage, then write tests that drive all the existing applications, and then work on new test applications to exercise the major areas not yet touched (a sketch of the coverage workflow follows below).
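Clang can do this today with source-based coverage. A minimal sketch, with the toolchain commands kept as comments above a toy unit test; test_sum.c and sum() are illustrative names, not anything from OpenSSL:

    /* Build and report with clang's source-based coverage:
     *   clang -fprofile-instr-generate -fcoverage-mapping test_sum.c -o test_sum
     *   ./test_sum                                  (writes default.profraw)
     *   llvm-profdata merge default.profraw -o test_sum.profdata
     *   llvm-cov report ./test_sum -instr-profile=test_sum.profdata
     */
    #include <assert.h>

    static int sum(int a, int b) { return a + b; }

    int main(void)
    {
        assert(sum(2, 2) == 4);  /* lines executed here show up as covered */
        return 0;
    }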

Comment Re:Eagerly awaiting ickle benchmarks (Score 1) 46

And the great thing about this project is that when a graphics hardware vendor has to choose where to spend 1000 hours of work, it can now be the 3D driver instead of a 2D driver, because someone else has created a layer that uses the 3D driver for all 2D operations.

Previously the money/time was spent on 2D to serve the largest target audience for the hardware, and unfortunately that 2D effort could not be reused for the Linux 3D use case, so no wonder improvements there were hard to come by.

So expect those crappy 3D drivers to get good in the areas the 2D-over-3D code needs real soon, and while they are in there they might as well pick off all the low-hanging fruit around 3D usage.

So don't apply your previous knowledge of Linux 3D driver support to what the future will now bring to both the 2D and 3D scene.

Comment Re:Depends on the threat model, doesn't it? (Score 1) 279

Wooossshhhh!

He (somenickname) is talking about the global CA system, where all 1000 CAs are equally trusted, so the NSA only needs to convince one of them to reissue a certificate (based on a private key the NSA provides) in the name of the target website they wish to intercept.

The content consumer has no way of knowing whether the SSL cert used for the HTTPS connection is the one based on the site owner's private key or the one based on the NSA's private key. This is why simply showing a green light because you switched to SSL is security theatre.

But you (kasperd) go off on a rant about other matters.

There are projects that address this, such as the SSL Observatory https://www.eff.org/observator... and Convergence http://convergence.io/index.ht... (see also http://tech.slashdot.org/story...), combined with DNSSEC (which has somewhat the same problems as the CA system, but is useful for deploying low-security websites without paying the sign-my-certificate tax).

Comment Re:I'm sorry I'm an idiot (Score 1) 204

but I don't really understand X11 either

The problem is that neither do the maintainers: nobody fully understands every corner of every feature, driver and extension in the major X11 implementations that exist (XFree86/X.Org).

So now we get Linux graphics drivers that target what is really needed (not the performance metric of blitting specific patterns at a specific bit depth to the screen), and we decouple the hardware driver from the display arbitrator (the software that decides which application gets to draw and use the hardware at any given time).

Most people are in exactly your position; the best thing to do is let those with the knowledge and enthusiasm knock themselves out trying to produce a better solution to the problem. That is just how progress happens. As for me, I stopped programming directly against the X11 API a long time ago; I use a toolkit like Qt, and that will not change. So I look forward to this bringing an improved graphical experience on all forms of Linux (mobile/pad/notebook/desktop) for the next 30 years.

I currently find Windows a better experience for using an IDE; modern X11 has too much input lag, and copy-and-paste is also laggy and unreliable, which really saps productivity.

Comment Re:Bad idea (Score 1) 351

Ah, my requirement is that links be bookmarkable (especially across the same user's login sessions, but occasionally shared between co-workers). These are business systems in constant use, and clicking a link, finding your session has expired, re-authenticating, and then having the link not work is not good for productivity.

So with that in place, you have not pointed out any actual flaw in this part of the problem domain, which is good news to me.

But it remains a requirement that multiple users of the same system cannot obtain secret business information (such as a DB primary key ID) that might leak data, for example how many records you have.

The other stuff you touched on is generally dealt with, once enabled, by my choice of web application framework; that still means you have to actively test that it is enabled and doing its thing in production.

Comment Re:OMG NO NETWORK TRANPARENCY!!!1 (Score 1) 128

Bollox; the developers of X11 are over 30 years older now. I think you are confusing the current maintainers of the two most popular X11 implementations with the actual developers who came up with the original ideas. The extensions added over the past 15 years of Linux's rise in popularity have had to restrict themselves to design choices made over 30 years ago. It is well overdue for a revamp; silicon has changed too much in that time.

Comment Re:I can almost imagine how it might be done (Score 1) 351

Re: XOR and real cipher

Use both. Initialize a symmetric cipher server-side for your webapp (so the keys stay hot and encoding is high-performance). Then, for each kind of thing you need to encode, XOR the raw database PK ID first and then pass it through the cipher. This way database ID 1 does not end up with the same ciphered result for every kind of thing you expose.

For extra points: many symmetric ciphers use block sizes larger than the 64 bits you actually need for your database PK ID, so pad the remaining bits on the left and right with random garbage.

For more points, use some of the unused bits (of the cipher block) as a checksum/CRC that can detect corruption or brute forcing. No real web request should ever get this wrong (unless you have bugs), so you can mark any client that does as suspect.

But then, who brute-forces more than 128 bits over the Internet?
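A minimal sketch of the whole scheme in C against OpenSSL's raw AES block API; the names (make_token, check_token), the per-entity mask and the 8/4/4 field split are illustrative choices, not anything standard:

    #include <stdint.h>
    #include <string.h>
    #include <openssl/aes.h>
    #include <openssl/rand.h>

    /* 16-byte AES block: 8 bytes masked ID | 4 bytes random pad | 4-byte check */
    static AES_KEY enc_key, dec_key;                 /* initialised once at startup */
    static const uint64_t entity_mask = 0xA5A5DEADBEEF0101ULL; /* per-entity mask */

    void token_init(const unsigned char key[16])
    {
        AES_set_encrypt_key(key, 128, &enc_key);
        AES_set_decrypt_key(key, 128, &dec_key);
    }

    static uint32_t check_of(uint64_t masked)        /* cheap integrity check */
    {
        return (uint32_t)(masked >> 32) ^ (uint32_t)masked ^ 0x5EC12E7u;
    }

    void make_token(uint64_t db_id, unsigned char out[16])
    {
        unsigned char block[16];
        uint64_t masked = db_id ^ entity_mask;       /* step 1: XOR the raw PK */
        uint32_t check = check_of(masked);

        memcpy(block, &masked, 8);
        RAND_bytes(block + 8, 4);                    /* step 2: random padding */
        memcpy(block + 12, &check, 4);               /* step 3: checksum bits  */
        AES_encrypt(block, out, &enc_key);           /* step 4: real cipher    */
    }

    /* Returns 1 and stores the ID on success; 0 on corruption/brute forcing. */
    int check_token(const unsigned char in[16], uint64_t *db_id)
    {
        unsigned char block[16];
        uint64_t masked;
        uint32_t check;

        AES_decrypt(in, block, &dec_key);
        memcpy(&masked, block, 8);
        memcpy(&check, block + 12, 4);
        if (check != check_of(masked))
            return 0;                                /* mark the client as suspect */
        *db_id = masked ^ entity_mask;
        return 1;
    }

Note the random pad bits are what stop the same ID from always producing the same token, since a raw single-block encryption is otherwise deterministic.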
