Comment Bullshit (Score 1) 225

"Add that these proprietary applications and the proprietary Google Play Services are the primary areas for Android innovation and development and you end up with an operating system that is less and less 'free' in the freedom and cost senses of the word."

Bullshit. Charging minuscule amounts of money at the OEM level for Android does not affect my ability to install any application I want (or have written myself), change central elements of the OS via add-ons, or dig down and read the source code if I wish. How, exactly, does Google monetizing Android stop any of that? I am locked out of all that on the other platforms, but on Android, at least, I have that freedom!

And 75c up front on a device? Yes, that WILL make it difficult to buy that new $700 phone I wanted! 75c is a LOT of money when you are already spending that much, isn't it? And all of the other ways Google monetizes the platform are mostly indirect, flowing from the ecosystem rather than the OS itself.

Also, "Google Play Services" are the "primary areas" for "Android innovation", what the hell does that mean? I think TFS must have been written by someone butthurt by how awesome Android is and how popular it has become. Suck it up, Android is awesome.

Comment Re:Exasperated sigh (Score 1) 135

Also (not to double-post, but): while I could have tested the assertion that browsers do not cache HTTPS resources, I think I will just let StackOverflow handle that one.

By default web browsers should cache content over HTTPS the same as over HTTP, unless explicitly told otherwise via the HTTP Headers received

http://stackoverflow.com/questions/174348/will-web-browsers-cache-content-over-https

That's from the accepted answer, which has 79 upvotes. The second answer, which has 119 upvotes, says:

As of 2010, all modern, current-ish browsers cache HTTPS content by default, unless explicitly told not to.
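
For completeness, here is a minimal sketch of what "explicitly told otherwise via the HTTP Headers" looks like on the server side. These are the standard Cache-Control/Pragma/Expires response headers; the exact values are my own illustration, not anything from the linked answers:

<?php
// Minimal sketch: opt a sensitive HTTPS response out of caching.
// Without headers like these, browsers cache HTTPS just like HTTP.
header('Cache-Control: no-store, no-cache, must-revalidate');
header('Pragma: no-cache'); // for older HTTP/1.0 caches
header('Expires: 0');       // an already-expired fallback
?>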

Comment Re:Exasperated sigh (Score 1) 135

A user should also always be warned when a post request is being resubmitted, since the HTTP RFC's only says get, head, put and delete should be idempotent.

RFC2616:

In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

While I agree with your sentiment here (I would prefer my browser to just not save any POST data at all for tab recovery, since it's so rare that doing so would actually be useful), the RFC section you quote does not specify anything about the persistence of resources to backing storage. It simply indicates how browsers should treat these requests in a general sense. And according to the verbiage you have provided, Safari, Chrome and Firefox are all working as expected, because that paragraph is telling browsers that automatic resubmission of non-GET requests can (and almost always does) have side effects (such as making a duplicate post to a forum, for instance).

Indeed, my original post briefly indicates that I was already well aware of this distinction between GET requests being safe (i.e., having no side effects) and other requests not being safe to resubmit. All modern standards-compliant browsers warn the user before resending the data of a POST request ("Are you sure you want to resubmit?"). Furthermore, none of these three major browsers (Chrome, Safari, Firefox) will automatically resubmit a POST request when it is pulled out of the session recovery feature.

Although how browsers handle the resubmission of non-GET requests is not really at issue in this topic, I nonetheless tested it as a side effect (huzzah) of seeing how common the storage of POST data for recovery actually is.

Chrome

Chrome has an interesting behavior when it comes to storing POST data for reopening later: it appears to save the response content of an HTTP POST and does not resubmit the POST request. The user is presented with the content they saw when they initially sent the POST request. Nonetheless, one can refresh (with the "Are you sure you want to resubmit" prompt) and the data will be resent in a new POST request. So Chrome is storing HTTP POST data for reuse when the tab is restored.

For HTTPS POST resources, however, Chrome will _not_ automatically restore the page content from its backing store (and by implication likely does not save the response at all), and it also will not resend the request until the user manually refreshes the page. HOWEVER, at that point all of the original data from the first POST request is resent. Thus HTTPS POST data _is_ persisted by Chrome.

Firefox

Firefox has completely different behavior. From my testing, it saves POST resources as GET resources: when a tab is restored, my test server receives a fresh GET request. So Firefox is NOT saving POST data for reuse upon tab recovery.

Conclusion

I'm not saying that the RFC doesn't matter; it does, and Chrome should be changed to be more in line with it. But regardless of whether it is best practice for browser makers to persist this data, my argument that this problem is not reliably exploitable stands, because the only time it is a problem at all is when a website delivers a POST response without an immediate redirect to a GET, which all sites SHOULD be doing.

The only time this isn't the case is when the resource presenting the login HTML form is also the action of the form it contains (or some variant of this situation). Now *that* is a bad practice that remains bad no matter what browser you are using.
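
For reference, here is a minimal sketch of the redirect-after-POST (Post/Redirect/Get) pattern I am describing; handle_login() is a hypothetical helper of my own, not anything from GMail or the article:

<?php
// Minimal Post/Redirect/Get sketch (handle_login() is hypothetical).
// After handling the POST we answer with a 303 redirect, so the page
// the user ends up sitting on (and the page a browser would save for
// session restore) is the result of a GET, not a credential-bearing POST.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $ok = handle_login($_POST['user'], $_POST['pass']); // hypothetical
    header('Location: ' . ($ok ? 'inbox.php' : 'login.php?error=1'), true, 303);
    exit;
}
?>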

Test Methods

Browser versions I tested with:

  • Chrome: 30.0.1599.114
  • Firefox: 24.0

I created a simple PHP script (post.php) and hosted it on my HTTPS-capable test server:
<?php
// Echo back the POSTed fields (or note their absence), plus a fresh
// random number and the request method, so every response is unique.
echo "foo is: ".(isset($_POST['foo']) ? $_POST['foo'] : '<em>not present</em>')."<br/>";
echo "user input is: ".(isset($_POST['user']) ? $_POST['user'] : '<em>not present</em>')."<br/>";
echo "random number: ".mt_rand()."<br/>";
echo "request method: ".$_SERVER['REQUEST_METHOD']."<br/>";
?>
<!-- The hidden field carries its own random nonce so a resubmission can
     be told apart from a fresh form load. -->
<form action="?" method="post">
<input type="hidden" name="foo" value="bar:<?= mt_rand() ?>" /><br/>
User input: <input type="text" name="user" value="" /><br/>
<button>Go</button>
</form>

I then submitted a request using this test page, saved the content outside the browser in a scratchpad, then force-killed my local browsers and restored them. I repeated the tests with both HTTP and HTTPS to see the difference. The random numbers act as nonces so that I can immediately tell whether a request has been resent without looking in the logs (though I did watch the logs for verification).

Comment Exasperated sigh (Score 2) 135

All this article means is that Google has a bug to fix with regard to the POST response on the GMail login page.

Some in this discussion say things like:

along with the password and login.

from the article: "the login and password are not encrypted (see the red oval in the screenshot)."

Let's be clear here. The only time any browser's session restore feature would store your username and password is when they are part of the HTTP request itself. An HTTP request can be a GET or a POST. Good web developers never send sensitive information in a GET, nor should a GET actually _do_ anything other than get a resource. POST requests should be used instead; they do not put the private information in the URL bar. Now, POST requests are not inherently more secure than GETs: the data they carry is in the same format as the query string of a URL-based GET request, and (without TLS) just as visible to observers on the network. I'm just setting out some of the technical background for all of the (clearly *very*) technical people who read Slashdot these days.
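
To make that concrete, here is roughly what the two variants look like on the wire (made-up host and credentials, simplified headers):

GET /login?user=alice&pass=hunter2 HTTP/1.1
Host: example.com

versus:

POST /login HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 23

user=alice&pass=hunter2

Same user=alice&pass=hunter2 payload either way; the GET just leaks it into the URL (location bar, history, server logs) while the POST keeps it in the request body.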

The "problem" is that older versions of Safari stores the details of POST requests "unencrypted". Which is fine because any encryption is meaningless if decryption can be done without user intervention. You would not encrypt some file and then place the key next to it and then store it that way would you? Nonetheless, this is exactly what happens when the encryption key is stored on the same volume as the encrypted file. If it can be found by software then it can be found by an attacker.

The really funny part is that the page Safari saved looks like a GMail login attempt that was unsuccessful (duh, the credentials are clearly not real). Google is doing authentication (almost) the right way here, fellas: the only time using GMail involves having your plaintext credentials in a POST request is during login, and login alone. From there on out it's just record keeping (a list of valid sessions on the server, and a known session ID in the hands of each client).

You can see this for yourself by opening an Incognito session, going to gmail.com, typing fake credentials (I don't know, "kaspersky_user" and "kaspersky_pass" come to mind for some reason), and then, on the "Invalid login" page, hitting refresh. You will be prompted by your browser to resend the login credentials. There! You see, the browser _still has_ your login credentials from before! AMAZING! Did you know that if you leave that tab untouched for three days, your creds will still be resent when you finally refresh?

Safari, like any other browser with a session restore feature, saves that POST data, because it's just POST data to the browser. And if you are sitting on a page that is itself the result of a POST request, the browser MUST save that data if it intends to give you the same page when you return, by both specification and convention. Google can fix this particular issue by redirecting invalid logins back to GET requests like the rest of the civilized world; that protects their users' account credentials from inadvertent storage on disk.

The more interesting part is how anyone could think this would be immediately dangerous, because had the user successfully logged in, they would not have been sitting on that POST request, would they? That's right: the credentials would have long since vanished and would never have made their way to disk. The only time this happens in practice is when the credentials were NOT correct. Don't get me wrong, that can still be valuable to an attacker, but it's a Google bug, not a Safari one. In GMail's situation this is a rare issue, and definitely not one that potential attackers can exploit reliably.

I cannot stress enough that a credential leak into a session backing file on a hard disk is only a problem when the site itself messes up or is written improperly and/or insecurely. And on the local side: if you don't trust the programs running under your user account, or you are not correctly using the multi-user capabilities of your operating system, you can't really trust anything you do on that machine, and you can't really complain either, because the industry already gave you the tools to protect yourself and you just didn't use them because you were lazy.

Comment Re:Extended Support Release (Score 1) 366

You can: for Chrome there are the beta and dev channels. I use beta Chrome, and I rarely have issues with it. Dev Chrome is very unreliable, and it should be, since it's the latest and least-tested revision of the software. For IE, well, no: there is only a preview for IE10, which isn't a full browser (at least last I checked), so you're right there.

Comment Re:Extended Support Release (Score 1) 366

I feel your pain. Mozilla is definitely not used to this sort of release model, and yes, the amount of negativity over it indicates that. But these are growing pains: the whole point of having major versions is that you can progressively move bugfixes and features from the latest and most unstable version up to the stable one as they become proven. I guess Mozilla's just doing a shitty job of it right now.

Comment Re:Extended Support Release (Score 1) 366

I am definitely not speaking to Java/Flash breaking, and yes, they *are* real web technologies. They are not GOOD web technologies, but they are still necessary. But are you saying that every time Mozilla releases a Firefox, everyone who uses Java or Flash has to go and do real work to get it working again? No, you're saying that there's a chance stuff like this could break, which is Mozilla's responsibility, not ours as web developers. If it's obvious that Mozilla is going to fix the problem, eventually, then tell your users to use a different browser until then if they want it to work. A lot of developers seem to think that their website MUST WORK at every second of every day on every browser version that anyone could possibly ever try. Just write to the standards, make exceptions when necessary, and when a bug happens, don't sweat it that much; just educate your users!

Comment Re:Extended Support Release (Score 2) 366

It should not! They are all supposed to implement the same HTML! If you write your website *properly*, staying as close to the standards as you can, it will show up almost perfectly on all the latest versions of the major browsers. That's simply true. JavaScript frameworks which deal with IE-specific ways of doing things are unfortunately still necessary, but using one like jQuery means you should be able to upgrade the library without rewriting parts of your site as IE evolves to become more standards-compliant. Use browser sniffing only when absolutely necessary, perhaps due to bugs or improper/missing implementation of the standards. Don't just lean on user-agent sniffing so you can target whatever browserX-only features you want. Besides, each time you branch your site you're adding that much duplicated maintenance work; that should be obvious.

Comment Re:It's a madness (Score 1) 366

Both Chrome and Firefox use hardware acceleration to render pages using all the horsepower of your video card. If anyone out of the three parties you listed is to blame, it *IS* ATI. That being said, a blue screen could come from any driver in your system, from conflicts between drivers, or from other parts of Windows. Any of those could be the culprit. As for why it happens in Firefox and not Chrome, there are two possibilities. For one, Chrome may implement its acceleration in an entirely different fashion (and probably does); as it turns out, there are many ways to solve a problem, especially when talking about OpenGL. The other possibility is that Google found out about the issue (either through direct testing or feedback) and blacklisted your drivers, so that hardware acceleration is not used.

Believe me, you *want* hardware acceleration. If Chrome is not using it because of your bad drivers, you are probably missing out on a better Web.

Also, when most well-informed people make a complaint like yours, they usually mention that they performed one (or several) driver updates to try to remedy the problem. You do know that you can upgrade your graphics drivers, right? Have you tried that?
