Most Alarming: IETF Draft Proposes "Trusted Proxy" In HTTP/2.0

Lauren Weinstein writes "You'd think that with so many concerns these days about whether the likes of AT&T, Verizon, and other telecom companies can be trusted not to turn our data over to third parties whom we haven't authorized, that a plan to formalize a mechanism for ISP and other 'man-in-the-middle' snooping would be laughed off the Net. But apparently the authors of IETF (Internet Engineering Task Force) Internet-Draft 'Explicit Trusted Proxy in HTTP/2.0' (14 Feb 2014) haven't gotten the message. What they propose for the new HTTP/2.0 protocol is nothing short of officially sanctioned snooping."
  • Well for one... (Score:5, Insightful)

    by Junta (36770) on Sunday February 23, 2014 @11:51AM (#46316039)

    Pretty much anyone can submit an IETF Internet-Draft if they really want. The existence of a draft does not guarantee a ratified version will exist someday.

    For another, it could be much worse. There is explicit wording here, at least, about seeking consent from the user and allowing opt-out even in the 'captive' case, as well as notifying the actual webserver of the intermediary; the intermediary must also use a particular keyUsage field, meaning that some trusted CA has explicitly approved it (of course, the CA model is horribly ill-suited for internet-scale security, but it's better than nothing). Remember how Nokia confessed that their mobile browser silently hijacked and proxied https traffic, without consent and without telling the user or the server? While formalizing something like this wouldn't prevent such a trick, it would be very hard to defend a secretive approach with this sort of standard in the wild.

    Keep in mind that in a large number of mobile cases, the carriers are handing people the device, including the browser they'll be using. Today a carrier could do what Nokia admitted to without the user being any the wiser, and claim the secrecy was just a side effect. If there were a standard clearly laying out how a carrier or mobile manufacturer should behave, that defense would go away.

    I would always elect the 'opt out' myself, but I'd prefer anything seeking to proxy secure traffic be steered toward doing things on the up and up rather than pretending no one will do it and leaving the door open for ambiguous intentions.

    • by WaffleMonster (969671) on Monday February 24, 2014 @12:35AM (#46320837)

      Remember how Nokia confessed they silently and without consent had their mobile browser hijack and proxy https traffic without explicitly telling the user or server? While something like this being formalized wouldn't prevent such a trick, it would be very hard to defend a secretive approach in the face of this sort of standard being in the wild.

      Except the consumers of this I-D would be anyone with a browser, which is potentially billions of people. The only question that matters, in my opinion, is whether you can explain the concept of a "trusted proxy of untrustworthy content" to an average person (e.g. cookie baking oracle)... if not, you are essentially asking the user to answer a question they don't understand. A stupid and pointless question, I might add.

      If there was a standard clearly laying out that a carrier or mobile manufacturer should behave a certain way, that defense would go away.

      Providing legal cover for illegitimate behavior is, I suspect, the whole point. See, the user said it was OK (even though they have no fucking clue), so now we have legal cover to continue with our bullshit without fear of retribution.

      It is not the fault of those who originally wrote the SMTP and HTTP standards that their shit is constantly abused to screw over and scam millions. This was all done in an era of implicit trust and technical sophistication.

      It is however our fault for decades later continuing to allow this shit that passes for basic communications today to be so easily coopted by scum. This ID does nothing to fix anything... It just pours more fuel on the fire by asking users TOTALLY USELESS questions they are incapable of answering.

  • by cardpuncher (713057) on Sunday February 23, 2014 @11:53AM (#46316051)
    But as I read it, the issue seems to arise from the fact that HTTP2 will permit TLS to be used with both http: and https: URLs. If it is used for http: URLs, then existing proxy and caching mechanisms will simply break. I think this is a proposal for "trusted proxies" to be permitted where an http: URL is in use and TLS is also employed; I don't think it's proposed that this should apply to https: URLs.

    In other words, it doesn't make things any worse than the current situation (where http: URLs are retrieved in plain text all the time), and it does permit the user to control whether they want some protection against interception or potentially better performance. And it doesn't appear to change the situation for https: at all.

    Or that's how it appears to me.
    • by WaffleMonster (969671) on Monday February 24, 2014 @12:08AM (#46320661)

      But as I read it, the issue seems to arise from the fact that HTTP2 will permit TLS to be used with both http: and https: URLs. If it is used for http: URLs, then existing proxy and caching mechanisms will simply break. I think this is a

      My understanding is that TLS for HTTP via "HTTP2" is accomplished via *untrusted* opportunistic encryption. Nothing breaks if you're operating a proxy supporting HTTP2: the proxy would simply terminate the encryption from the client and set up a separate, equally useless "encrypted" channel to the server. The proxy would act as a middle man.

      proposal for "trusted proxies" to be permitted where an http: URL is in use and TLS is also employed, I don't think it's proposed that this should apply to https: URLs.

      Basically what they are proposing is to provide a "trusted proxy" for completely untrustworthy http transactions. How is this not an oxymoron? What is the security value? Value to the user? Who benefits?

      It seems all this does is add more complexity while accomplishing nothing. And as for consent: good luck explaining "secure proxy of insecure data" doublespeak to an average human being who has better things to do with their time than read IETF I-Ds... more likely this will only confuse the hell out of people, causing them to assume things about the content they are consuming which are false.
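The middle-man termination described above (terminate TLS from the client, re-originate it toward the server) reduces, at the byte level, to a pair of relay loops. A minimal single-direction sketch in Python, purely illustrative:

```python
import socket

def relay(src: socket.socket, dst: socket.socket, bufsize: int = 4096) -> int:
    """Copy bytes from src to dst until EOF; returns bytes relayed.
    An intercepting proxy runs one such loop per direction between its
    two TLS endpoints, and everything passing through is plaintext to it."""
    total = 0
    while True:
        chunk = src.recv(bufsize)
        if not chunk:  # peer closed its sending side
            break
        dst.sendall(chunk)
        total += len(chunk)
    return total
```

In a real proxy both `src` and `dst` would be `ssl.SSLSocket`s, each the endpoint of a separate TLS session, which is exactly why neither session is end-to-end.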

  • The current solution (Score:2, Informative)

    by Anonymous Coward on Sunday February 23, 2014 @11:54AM (#46316055)

    If you want to do this now, you're typically in one of two situations:

    You need to proxy the traffic for all users of a company, in order to filter NSFW content and to scan for viruses and other malware. In this case you add your own CA to all company computers. Then you MITM all SSL connections. This doesn't work for certain applications which use built-in lists of acceptable CAs, but mostly the users will be none the wiser.

    The other situation is that you want a reverse proxy in front of your hosting infrastructure. In this case you just have the proxy operator install your certificate and make it look like the proxy is your actual server.

    In both cases, the Trusted Proxy extension would make more transparent what's actually going on, instead of pretending that there is no proxy when in fact there is.

  • by MobyDisk (75490) on Sunday February 23, 2014 @11:56AM (#46316069) Homepage

    My employer uses a MITM HTTPS proxy. The IT department pushed down a trusted corporate certificate, and most people don't even know their HTTPS connections aren't secure any more. The real problem is when some application other than a browser needs internet access and it fails. This includes things like web installers that download the app during installation, automatic update systems, secure file transfer software, or things that call home to confirm a license key. On occasion a developer curses some installer for not working; then we inspect the install.log file and find something about a certificate failure.

    IT departments forget that HTTPS is used for more than just browsing the web.

    • by timeOday (582209) on Sunday February 23, 2014 @12:15PM (#46316173)
      Same at my company, but I take issue with "people don't even know their HTTPS connections aren't secure any more". Corporate machines are "rooted" in the first place, they generally install whatever new software the employer wants during each reboot or login. Probably half the cycles on my work computer are wasted on Symantec spyware. So, you can't lose the privacy you never had.
    • by smash (1351) on Sunday February 23, 2014 @12:22PM (#46316195) Homepage Journal

      IT departments forget that HTTPS is used for more than just browsing the web.

      No, not necessarily. Some IT departments are just more paranoid than others about letting unfiltered HTTPS through the firewall, due to the new generation of malware, which typically does C&C over HTTPS to thousands of randomly generated, not-yet-blacklisted URLs.

      You have a choice: you MITM/inspect HTTPS, you allow only whitelisted HTTPS connections (not really practical, given the ever-changing whitelist), or you allow any and all malware C&C straight through the corporate firewall. Option 3 is not really acceptable.

    • Those applications are broken. If they fail to respect the OS proxy and CA settings, they are the ones at fault.

      In a corp environment nothing should be calling home, ever; that is what they made license servers for.
      Updates should be gotten from an update server, ya know, something that IT approves.
      Installers calling home, again, should never happen.
      Post-SOX/HIPAA there is no secure file transfer; your IT dept has a legal requirement to look at and record things coming in and out the door.

      • by drolli (522659) on Sunday February 23, 2014 @01:41PM (#46316751) Journal

        That depends on what the purpose of the application is. There are purposes for which you may prefer an application failing instead of accepting another certificate. If the application promised end-to-end security, with a very specific *certified* configuration ending on the target (I imagine software updates for the development of embedded systems in cars), then failure by default is the right behaviour until somebody signs off on a sheet of paper saying that he/she/the company takes responsibility to the end customer (e.g. the development department) for anything transmitted the wrong way.

      • by MobyDisk (75490) on Monday February 24, 2014 @11:48AM (#46323615) Homepage

        Post SOX/HIPPA there is no secure file transfer your IT dept has a legal requirement to look and record things coming in and out the door.

        Ironically, HIPAA requires that they NOT record things coming in and out of the door. Yay for regulation!

      • by MobyDisk (75490) on Monday February 24, 2014 @11:52AM (#46323659) Homepage

        In a corp environment nothing should be calling home ever, that is what they made licences servers for.
        Updates should be gotten from an update server, ya know something that IT approves.
        Installers calling home again should never happen.

        Thinking like that is why everyone hates IT departments. You are saying that applications should be designed to support the IT department's way of doing things. In reality, lots and lots of apps call home and perform their own licensing. There's nothing wrong with that, except that it interferes with the IT department's "vision" of perfect control.

        Post SOX/HIPPA there is no secure file transfer your IT dept has a legal requirement to look and record things coming in and out the door.

        Actually, HIPAA states the exact opposite. That is why our company has specific file transfer rules in place to prevent snooping. If our IT department intercepted that they would be out of compliance with our own policies!

    • by Richard_J_N (631241) on Sunday February 23, 2014 @06:35PM (#46318735)

      As a website operator, I want to know if my content is being MITMd en route to the user. I know about the SSL fingerprint trick that lets a really technical user discover proxying, but I want to automate this process server-side, and stick up a big banner to say "Your employer is snooping on this connection, please log in from a trusted machine" (and then I'll prevent the user from logging in).

      • by WaffleMonster (969671) on Monday February 24, 2014 @12:43AM (#46320875)

        As a website operator, I want to know if my content is being MITMd en route to the user. I know about the SSL fingerprint trick that lets a really technical user discover proxying, but I want to automate this process server-side, and stick up a big banner to say "Your employer is snooping on this connection, please log in from a trusted machine" (and then I'll prevent the user from logging in).

        What you just wrote makes about as much sense as: "My Internet is currently down so I'm sending a nasty e-mail to my ISP demanding they fix the problem."

        • by Richard_J_N (631241) on Monday February 24, 2014 @10:33AM (#46322905)

          Why? If the connection is being MITMd, then both sides need to be able to figure this out.
          There was a long discussion of a proposal (regrettably rejected by the browser vendors) to allow the SSL fingerprint to be obtained in JS. That would make it reasonably easy for the site operator to verify that the SSL cert hadn't been tampered with. (Of course, a really evil proxy can scan for the JS, but that game of whack-a-mole is usually easier for the good guys to win, at least sometimes.)
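For what it's worth, the fingerprint comparison under discussion is simple to sketch. Here is an illustrative client-side pinning check in Python (function names and the pin are invented; note the thread's caveat still applies, since a JS version of this check travels over the very channel being verified):

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, colon-separated hex."""
    digest = hashlib.sha256(der_cert).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def is_pinned(der_cert: bytes, expected_fp: str) -> bool:
    """True if the served certificate matches the pinned fingerprint.
    A MITM proxy re-signing traffic with its own CA presents a different
    leaf certificate, so its fingerprint will not match the pin."""
    return fingerprint(der_cert) == expected_fp

# Against a live host (network required) one might do:
#   pem = ssl.get_server_certificate(("example.com", 443))
#   der = ssl.PEM_cert_to_DER_cert(pem)
#   is_pinned(der, KNOWN_GOOD_FINGERPRINT)
```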

          • by WaffleMonster (969671) on Monday February 24, 2014 @12:22PM (#46323947)

            Why? If the connection is being MITMd, then both sides need to be able to figure this out.

            You answer your own question in the next paragraph.

            You have a compromised communication channel and you are making decisions based on content of data communicated over that channel. It's broken, so let's use it anyway and hope for the best.

            There was a long discussion on this (regrettably rejected by the browser vendor) to allow the SSL fingerprint to be obtained in JS. That would make it reasonably easy for the site operator to verify that the SSL cert hadn't been tampered with. (Of course, a really evil proxy can scan for the JS, but that game of whack-a-mole is usually easier for the good guys to win, at least sometimes).

            If you want servers to validate clients, use client certificates or TLS-SRP to log on to a site. All MITM countermeasures need to be cryptographically bound to session encryption or they are useless. "Whack-a-mole" scenarios do not prove security, and security without meaningful trust is an illusion.
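A sketch of what "cryptographically bound to session encryption" can mean in practice: prove knowledge of a shared secret by HMACing the TLS channel binding (the value Python exposes as `ssl.SSLSocket.get_channel_binding("tls-unique")`). The helper names below are invented for illustration:

```python
import hashlib
import hmac

def channel_bound_proof(shared_secret: bytes, channel_binding: bytes) -> bytes:
    """A proof only a holder of the secret speaking over *this* TLS session
    can produce. On a live connection, channel_binding would come from
    ssl.SSLSocket.get_channel_binding("tls-unique")."""
    return hmac.new(shared_secret, channel_binding, hashlib.sha256).digest()

def verify_proof(shared_secret: bytes, server_binding: bytes, proof: bytes) -> bool:
    """The server recomputes with the binding of *its* TLS session. A MITM
    terminating TLS gives client and server different binding values, so
    the HMACs disagree even when the secret itself is correct."""
    expected = channel_bound_proof(shared_secret, server_binding)
    return hmac.compare_digest(expected, proof)
```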

    • by steelfood (895457) on Monday February 24, 2014 @12:27AM (#46320791)

      The IT department didn't forget. The higher-ups never knew, never bothered to find out, and were never interested in the answer anyway.

  • by SuricouRaven (1897204) on Sunday February 23, 2014 @11:57AM (#46316081)

    It's already quite easy to add a * certificate to a browser to allow a proxy to intercept SSL. This is a standard practice in many LANs to allow the web filter to work on SSL pages - otherwise it'd be impossible to perform more than the most basic DNS/IP filtering on HTTPS sites, which would let a *lot* of undesired content through - google images alone would be quite the pornucopia.

    All this proposal does is formalise the mechanism that people are already widely using. The end user still needs to explicitly authorise the proxy, no different than adding a * certificate today - and that's something so common, Windows lets you do it via group policy. The author's big fear seems to be that ISPs could start blocking everything unless the user authorises their proxy - and they could do that already, just by blocking everything unless the user authorises their * certificate!

    And either way, they won't, for reasons of simple practicality. Sure, they could make the proxy authorisation process easy by providing a little 'config for dummies' executable. Easily done. Now repeat the same for the user's family with their three mobile phones (one Android, one iOS, one BlackBerry), two games consoles, IP-connected streaming TV, the kid's PSP and DS (or successor products), the tablet, and the internet-connected burglar alarm. All of these will be using HTTP of some form to communicate with servers somewhere, and half of them over HTTPS, with the proportion shooting *way* up if HTTP/2.0 catches on.

    • by x0ra (1249540) on Sunday February 23, 2014 @03:51PM (#46317617)
      The NSA, GCHQ, and other acronym agencies are already spying on everybody, so let's just formalize it even more. HTTPS MITM is wrong at its very core, formalized or not.
    • by tlambert (566799) on Sunday February 23, 2014 @05:13PM (#46318137)

      It's already quite easy to add a * certificate to a browser to allow a proxy to intercept SSL. This is a standard practice in many LANs to allow the web filter to work on SSL pages - otherwise it'd be impossible to perform more than the most basic DNS/IP filtering on HTTPS sites, which would let a *lot* of undesired content through - google images alone would be quite the pornucopia.

      So, if I understand you correctly, this proposal does nothing which it is not already possible to do, and should therefore be discarded...

      • by SuricouRaven (1897204) on Sunday February 23, 2014 @06:37PM (#46318749)

        Sort of. Nothing that isn't possible to do right now. But MITM-via-trusted-cert isn't the tidiest approach. It's an administrative headache - every OS has its own method for adding a trusted cert, some don't permit it at all, and it doesn't allow clients to validate the server's certificate if the proxy doesn't accept it. I'm not sure quite what this proposal is, but it appears to be something to properly build in support for trusted-cert interception from the beginning, so it won't be such an inconvenience.

    • by dbIII (701233) on Monday February 24, 2014 @12:13AM (#46320691)

      otherwise it'd be impossible to perform more than the most basic DNS/IP filtering on HTTPS sites, which would let a *lot* of undesired content through

      So? That's not really enough of an excuse to record what the users have flagged as purely private conversations by using https in the first place. IMHO it's just as immoral as installing cameras in motel bedrooms and bathrooms to make sure the guests don't get up to any private acts that the motel's terms of service forbid.

      If I were in possession of one of these stupid SSL accelerator boxes, and the police decided to look at it and found details of online banking for users' personal accounts, there's a pretty good case to send me to jail, even if site policy forbids users from personal web access. Even if you don't look at the stuff, the fact that it records the information, even just for the purpose of caching, and that it can be read by those with access to the appliance, is enough to crash up against the spirit of plenty of laws, if not the actual wording.

  • Misleading summary (Score:2, Interesting)

    by claytongulick (725397) on Sunday February 23, 2014 @12:03PM (#46316121) Homepage

    From the *actual* draft:

        This document describes two alternative methods for an user-agent to
        automatically discover and for an user to provide consent for a
        Trusted Proxy to be securely involved when he or she is requesting an
        HTTP URI resource over HTTP2 with TLS. The consent is supposed to be
        per network access. The draft also describes the role of the Trusted
        Proxy in helping the user to fetch HTTP URIs resource when the user
        has provided consent to the Trusted Proxy to be involved.

    The entire draft is oriented around user consent and transparency to the user... where is the problem here?

    The linked article by Lauren Weinstein is very heavy on sarcasm, scorn, and flippant one-liners, but pretty light on technical details. From what I can discern, his primary concern is that ISPs will force all of their users to consent to them acting as a trusted proxy, or refuse to serve them.

    This is pretty far-fetched, imho. First of all, the backlash from the average consumer would be staggering. If, every time they go to their bank's web page, they get a scary security notice ("do you want to allow an intermediary at trustedproxy.verizon.com to see your private data?"), the answer, every time, will be "hell no". And if they are then unable to access their bank account because of this... well, that's not going to be a pretty picture for L1 support.

    Second, the *last* thing most ISPs want is to have to deal with yet more PCI concerns. If they end up storing your cc number and ssn in a plain-text cache, that introduces all sorts of potential problems for them.

    It seems like the primary use case for this technology is serving media-heavy content that SSL screws up, like streaming video over SSL: it would allow caching for various media streams that really don't need SSL. And the user could decide whether they want it or not.

    This seems like a pretty smart thing to me, I'm not sure what all the hand-wringing is about. Maybe I'm missing something obvious?

  • by turkeyfish (950384) on Sunday February 23, 2014 @12:14PM (#46316167)

    What is going to happen to all those secure credit card transactions that are the life-blood of internet commerce, when third parties figure out how to decrypt packets en route by infiltrating the procedures of ISPs and altering them to "achieve efficiencies"?

    You would think capitalists have a lot to lose if this proposal goes forward.

    • by smash (1351) on Sunday February 23, 2014 @12:24PM (#46316197) Homepage Journal
      You mean playing man-in-the-middle with your HTTPS? It's already been going on for years.
      • by TheGratefulNet (143330) on Sunday February 23, 2014 @02:30PM (#46317161)

        if I didn't install the OS and I'm inside a corp LAN, I assume the worthless little 'lock' icon doesn't mean shit anymore.

        I would use my own laptop and my own purchased and installed VPN.

        these days, if you are in a corp LAN, you have to assume you are being logged and your traffic sniffed. this isn't 10 yrs ago, when it was new and hot to do this; I would assume any company bigger than 10 people has this 'proxy' shit going on (MITM SSL).

        and about 10 yrs ago, I had an interview at Bluecoat, where I was informed by a manager that they were SO PROUD of the sniffing and the fake certs they make users accept (crafted to look very much like 'real' ones), and that the lock icon was worthless from then on. I didn't take the job (it was too creepy), but it was a huge eye-opener for me. I posted about it and got lots of disbelief. well, NOW there isn't so much disbelief anymore. turns out I was right (or rather, BC was right when they showed me this demo at the interview).

    • by Rick Zeman (15628) on Sunday February 23, 2014 @12:25PM (#46316209)

      What is going to happen to all those secure credit card transactions that are the life-blood of internet commerce, when third parties figure out how to decrypt packets en-route by infiltrating the procedures of ISP's and alter them to "achieve efficiencies"?

      You would think capitalists have a lot to lose if this proposal goes forward.

      No kidding. Every day brings more and more proof that the bad guys are smarter (or at least way more motivated) than the good guys.

    • by UnderCoverPenguin (1001627) on Sunday February 23, 2014 @07:16PM (#46319031)

      Valid point.

      Originally, SSL/TLS and HTTPS were developed and deployed to provide protection for this small amount of sensitive data.

      Now, for various reasons, we have HTTPS protecting pages that contain a lot of "rich" content that doesn't actually need this protection. This has the side effect of creating a lot of extra, uncachable content. I can understand why ISPs would want a way to handle that.

      So, is there a way to securely protect the sensitive stuff while leaving the rest unencrypted? Perhaps the non-sensitive stuff could be validated* with secure hashes, and could then be cached without the need to decrypt anything?

      *As I understand, one of the current problems with mixing HTTPS and non-HTTPS content on the same page is that the non-secure content can affect how the secure content is handled.

  • by redshirt (95023) on Sunday February 23, 2014 @12:41PM (#46316301)
    What stands out is that Section 7, "Privacy Considerations," has no content.
  • Call me old school, but transparent interception of https does not increase my feeling of safety. It breaks the net and any security I might imagine in a transaction. This technology will make it really easy for anyone to do what, for example, Microsoft does to Skype connections (which is why Skype isn't allowed in my company). It provides for any number of decryption points to be created between you and your bank or whatever.

    The doc suggests that it can be used for both anonymization and deep inspection, positing that both are "good". I think it depends on who the user is whether either is desirable.

    As for a company pushing corporate certificates down its users' throats without them knowing it, I think this is pretty dangerous. The Internet is such a pervasive part of life now that, if not informed, a user has a reasonable expectation that his or her communications will not be intercepted and possibly reformulated. It is like an operator listening to your conversation and being able to interject words that you both think the other has said. Perhaps some people who don't remember a time before social media don't get it. I think a company should trust its employees and not intercept communications leaving the company; it is despicable, immoral, and weakens human dignity.

    If there are such overarching security issues like multimillion dollar contracts or secret plans that are worth alienating your workforce, then you should tell them and also install other demeaning but powerful security technology like biometrics, laser fields, strip searches, etc. The idea that some guys sat down to write this document and imagined that the "good" uses of this would not be massively overshadowed by the horrible uses of it is just so appalling it nauseated me to read it.

    Yes, this sort of thing is going on now. But no, I don't think it is a good direction for society. I am not talking about national security forces but about corporations, who will find plenty of reasons to implement this, so that while the desired "responsibility to management" (load balancing, security monitoring, whatever) is performed, there will be much more generally available back doors into any available communication, ready and waiting for someone who thinks it might be neat to open the door. The technology works regardless of whether there is a court order or anyone responsible in the vicinity. You may think I am paranoid, but it is one thing when the police need wiretapping to catch mobsters (I doubt they would catch any terrorists that way, but who knows); it is another thing when the campus police, the kindergarten babysitter, and every tom, dick, and harry with a web/phone/video startup sees this as a fresh new playing field. If they want to outlaw SSL, fine. But I don't want to be using SSL and not know whether it really works, because my ISP or phone company or cable company feels a need to be a man in the middle. Must the net be infinitely porous? They just can't leave shiny toys alone.

  • by dackroyd (468778) on Sunday February 23, 2014 @01:02PM (#46316449) Homepage

    The author who says that this is 'most alarming' is missing one key thing: sometimes people use computers that belong to someone else.

    Any company that needs its employees to be able to use the internet, but also wants to be able to detect any employee sending documents outside the company, would love to use this, and has every right to install it on its own computers. They could then have the employees' computers trust the SSL proxy, and it could easily detect any documents being transmitted.

    Poul-Henning Kamp covers this at the end of his talk at http://www.infoq.com/presentat... [infoq.com] from 14:40 .

    • by tlambert (566799) on Sunday February 23, 2014 @05:18PM (#46318171)

      The author who says that this is 'most alarming' is missing one key thing: sometimes people use computers that belong to someone else.

      Any company that needs its employees to be able to use the internet, but also wants to be able to detect any employee sending documents outside the company, would love to use this, and has every right to install it on its own computers.

      Alternately, they put in a transparent https proxy, sign a trust certificate for the proxy, and install the cert on all the corporate computers. Attempts to access port 443 from interior computers which do not already have the cert installed are redirected to a download page for the cert, with a one-time "opt in". That makes the proposal totally unnecessary for this use case.

  • by LostMyBeaver (1226054) on Sunday February 23, 2014 @01:16PM (#46316569)
    While the article justifiably blows the whistle on what could be an abuse of power, its premise is BS at best. It suggests that the tech could be used to maliciously snoop on people without their knowledge. The spec says nothing of the sort: it allows a user to make use of a proxy. In the case of a TLS-only HTTP 2.0, this is needed. Without it, people like myself would have to set up VPNs for management of infrastructure. I can instead run a web-based, authenticated proxy server which permits me to manage servers and networks in a secure VPN environment where end-to-end access is not possible.

    An additional benefit of the tech will be outgoing load balancing for traffic, which adds further security.

    How about protecting users' privacy by using this tech? If HTTPv2 is any good for security, deep packet inspection will not be possible, and as a result all endpoint security would have to exist at the endpoint. Porn filters for kids? Anti-virus for corporations? Popup blockers?

    How about letting the user make use of technology like antivirus on their own local machine to improve their experience? How many people on slashdot use popup blockers which work as proxies on the same machine?

    This tech adds to their security end-to-end instead. After all, it allows a user to explicitly define a man-in-the-middle to explicitly trust applications and appliances in the middle to improve their experience.

    What about technology like Opera mini which cuts phone bills drastically or improves performance by reducing page size in the middle.

    Could the tech be used maliciously? To a limited extent... Yes. But it is far more secure than not having such a standard and still using these features. By standardizing a means to explicitly define trusted proxy servers, it mitigates the threat of having to use untrusted ones.

    Where does it become a problem? It'll be an issue when you buy a phone/device from a vendor who has pre-installed a trusted proxy on your behalf. It can also be an issue if the company you work for pushes out a trusted proxy via group policy that is able to decrypt more than it should.

    I haven't read the spec entirely, but I would hope that banks and enterprises will be able to flag traffic as "do not proxy" explicitly so that endpoints will know to not trust proxies with that information.

    Oh... And as for the tracking the writer suggests... While we can't snoop the content, tools like WCCP, NetFlow, and NBAR (all Cisco flavors), as well as transparent firewalls and more, can already log URLs and usage patterns without needing to decrypt.
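    For instance, the destination hostname already leaks in cleartext in the TLS handshake via the SNI extension. Here's a hedged sketch of pulling it out with no decryption at all; the ClientHello is hand-built and minimal (one cipher suite, no other extensions), so the parser is deliberately simplified and not suitable for real traffic:

```python
import struct

def build_client_hello(hostname):
    """Build a minimal TLS ClientHello record carrying only an SNI extension."""
    name = hostname.encode()
    # server_name extension: type=0, ext_len, list_len, name_type=0, name_len, name
    sni_ext = struct.pack("!HHHBH", 0x0000, len(name) + 5, len(name) + 3, 0, len(name)) + name
    body = (
        b"\x03\x03" + b"\x00" * 32                        # client_version + random
        + b"\x00"                                          # session_id length = 0
        + b"\x00\x02\x13\x01"                              # one cipher suite
        + b"\x01\x00"                                      # one compression method (null)
        + struct.pack("!H", len(sni_ext)) + sni_ext        # extensions
    )
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def extract_sni(record):
    """Read the server name out of a ClientHello -- pure byte parsing."""
    p = 5 + 4                                              # skip record + handshake headers
    p += 2 + 32                                            # skip version and random
    p += 1 + record[p]                                     # skip session_id
    p += 2 + struct.unpack_from("!H", record, p)[0]        # skip cipher suites
    p += 1 + record[p]                                     # skip compression methods
    ext_end = p + 2 + struct.unpack_from("!H", record, p)[0]
    p += 2
    while p < ext_end:
        ext_type, ext_len = struct.unpack_from("!HH", record, p)
        p += 4
        if ext_type == 0x0000:                             # server_name
            name_len = struct.unpack_from("!H", record, p + 3)[0]
            return record[p + 5 : p + 5 + name_len].decode()
        p += ext_len
    return None

print(extract_sni(build_client_hello("www.example.com")))  # www.example.com
```

    That's the hostname, visible on the wire before any encryption is negotiated, which is all a passive logger needs for per-site usage records.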

    So... May I be so kind as to simply say "This person is full of shit" and move on from there?
    • by Anonymous Coward on Sunday February 23, 2014 @04:55PM (#46318013)

      This tech adds to their security end-to-end instead. After all, it allows a user to explicitly define a man-in-the-middle, trusting specific applications and appliances in the middle to improve their experience.

      I think you need to re-examine your use of the word "security" and "end-to-end".

      This does precisely the opposite of what you said, to achieve the aim you stated.

      "This tech reduces their security end-to-end, to improve their experience" is what it does. I admit, it has the potential to improve their experience, if cached content is more important that secure content. But it can only *reduce* security end-to-end. There is no possibility whatsoever that it could ever maybe slightly increase security. It can only possibly improve their experience, as long as that experience is wholly devoted to page-load-times due to cached content and content compression.

      If their "experience" is ever tainted by things such as, information leak or third party malware injections, then this technology can only ever reduce security, since there is an additional place to target for such things that never existed before.

  • by the eric conspiracy (20178) on Sunday February 23, 2014 @02:53PM (#46317335)

    It seems to me this is just an attempt to standardize what people are already doing with fakey hackish methods involving bogus certs etc.

  • One of the benefits of using HTTPS currently is that it avoids broken proxies. There are all sorts of implementations that claim to support HTTP 1.1, but don't support 100 Continue, content negotiation, or other important features you might need to use. If you use HTTPS, it currently avoids all the breakage (unless the destination server itself is actually broken). Besides the security issues inherent in this model, you have to worry about all the cases in which somebody installed some broken proxy that doesn't actually implement half the standard, breaking all sorts of sites.

  • by matthewv789 (1803086) on Monday February 24, 2014 @01:27AM (#46321061)

    This is the same question as what to do with "HTTP" (not HTTPS) requests when transported over HTTP2 (which is supposed to be all TLS) and SPDY (which is already all TLS, and which HTTP2 is based on). Usually it's framed in the context of "do we need to authenticate and verify TLS certificates when the user didn't originally request HTTPS?"

    Some people are of the opinion that "TLS is TLS, and if you can't 100% trust it, there's no point." And I can see the logic in that. Obviously that should always be the case when you've explicitly requested an HTTPS connection, and ideally, at some point in the future, it would be nice to be the case for all network connections, all the time.

    But when you step back, you have to realize that those connections are currently completely unencrypted and untrusted - they're HTTP, not HTTPS. And that the march to encryption is slow. The majority of websites have no TLS encryption capability at all, maybe as many as 20% of the remainder are self-signed, and quite a lot of the rest may have certs which don't match the domain being requested. (The same is no doubt true of apps, mobile or otherwise.) And the latter problem, particularly, is quite difficult to solve for technical reasons in a lot of cases critical to the orderly and economical operation of the internet, such as CDNs.
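    The name-mismatch check itself is mechanically simple. As a simplified sketch of RFC 6125-style matching (real validators enforce more rules than this, e.g. around partial-label and public-suffix wildcards), it also shows why a wildcard cert still fails once a site nests one subdomain level deeper:

```python
def cert_matches_host(cert_name, hostname):
    """Simplified hostname check: exact match, or a lone '*' as the
    entire left-most label matching exactly one label."""
    cert_labels = cert_name.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False  # wildcard never spans multiple labels
    if cert_labels[0] == "*":
        cert_labels, host_labels = cert_labels[1:], host_labels[1:]
    return cert_labels == host_labels

print(cert_matches_host("www.example.com", "www.example.com"))  # True
print(cert_matches_host("*.example.com", "www.example.com"))    # True
print(cert_matches_host("*.example.com", "a.b.example.com"))    # False
print(cert_matches_host("*.example.com", "example.com"))        # False
```

    So a CDN serving thousands of customer domains from shared IPs can't paper over mismatches with one wildcard; each hosted name genuinely needs to appear on a cert.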

    This goes beyond the usual lament that sites will need to pay $100+ per year to get a cert - that's not really the problem, though from my experience most site owners will have to be dragged kicking and screaming before they bother to install a cert and get HTTPS running properly. Even if a cert is installed, most of them want to redirect back to HTTP at any opportunity.

    Besides performance, cost, and administrative hassle, the big problem is the royal pain that it can be to take care of all the issues of trusted certs across hosting providers, CDNs, lead generation partners, etc. That's because in a lot of cases, those providers are hosting assets under a variety of domains - sometimes hundreds or thousands of domains - on single shared servers (or many copies of shared servers), each with a single IP address shared among the various domains. It's shared hosting all over again, this time writ large across global CDNs and the like. Even with your own hosting provider, you might face the same problem on development and staging environments even if not on production, making testing difficult. And while they're working on the problem, so far HTTPS does not play well with shared hosting. (On top of that, a lot of ad networks don't support HTTPS at all, so they introduce the mixed content problem into your pages. If your site depends on ads, you might not be able to serve them over HTTPS connections, which is why some sites offer HTTPS only to paying customers.)

    The whole idea of SPDY or HTTP2 being "TLS-only" is laudable, to gain opportunistic encryption even when the user didn't request HTTPS. But by so thoroughly breaking sites with mixed content or untrusted certificates (either expired or self-signed or for the wrong hostname or whatever), I'm of the opinion that all it's doing is delaying the adoption of TLS for websites. Rather than going "oh well, to get HTTP2, we'll have to fix this", most sites, faced with the hassle and resulting broken pages, will drag their heels adding HTTPS or enabling HTTP2, forcing downgrades to HTTP 1 for many years to come.

    Encryption absolutists portray the question in simple terms: why would you not want to trust your encrypted connection? You'll be vulnerable to man in the middle attacks, therefore they should always be authenticated and verified. But the real question is: when users haven't specifically requested HTTPS, is it better to have those connections mostly be COMPLETELY unencrypted and untrusted (which are even more susceptible to MITM), but when they are encrypted to trust them (even if the user can't see that they're encrypted or trusted)? Or for a larger proportion of them to be encrypted, but not necessarily always trusted in the f

    • by matthewv789 (1803086) on Monday February 24, 2014 @01:34AM (#46321089)

      My apologies, the second to last paragraph should read "in order to use SPDY or HTTP2 even for "HTTP" requests"...

      The extra "HTTPS" is nonsensical in this context and should not be there.

    • by matthewv789 (1803086) on Monday February 24, 2014 @01:43AM (#46321125)
      Also, I could point out that requiring validation of TLS certificates for SPDY/HTTP2 prevents actual shared hosting from opportunistically encrypting all the zillions of sites they host, which would be trivial right now (chances are they DO have a certificate installed... in the ISP's name... but not for every site they host). While this wouldn't allow real trusted "HTTPS" connections, it would allow for a LOT of sites to suddenly be using encryption routinely without either the site owners or the end users even knowing it. All the hosting provider would need to do would be to enable SPDY, or later HTTP2, on their servers, and it would start opportunistically encrypting all the hosted sites using the hosting provider's certificate.
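      In Python's standard library, for example, the server-side hook for this is `SSLContext.sni_callback`. A hedged sketch of how a shared host could fall back to its provider-wide certificate whenever no per-site cert exists (the cert path is hypothetical and commented out so the snippet stands alone, and the per-site table starts empty):

```python
import ssl

# Per-domain SSLContexts, keyed by hostname. Empty by default: every site
# then handshakes with the provider's own certificate -- encrypted, but not
# authenticated for the site's name.
site_contexts = {}

def choose_context(ssl_sock, server_name, default_ctx):
    """sni_callback: swap in a per-site context when one exists.
    Returning None lets the handshake continue with the current context."""
    ctx = site_contexts.get(server_name)
    if ctx is not None:
        ssl_sock.context = ctx  # this site has its own certificate

provider_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# provider_ctx.load_cert_chain("provider.pem")  # provider-wide cert (hypothetical path)
provider_ctx.sni_callback = choose_context
```

      The point being: the machinery for per-name selection already exists, so "no per-site cert" could mean "encrypt anyway with the provider's cert" rather than "plaintext", if the protocol tolerated untrusted certs for requests that were never HTTPS to begin with.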
  • by upuv (1201447) on Monday February 24, 2014 @09:33AM (#46322467) Journal

    This is a laughably bad idea.

    This will be abused the instant it hits code. The temptation is too great. This will sink the adoption of HTTP 2.0, and 1.1 will live on far longer.

    With all of the news around man-in-the-middle attacks, I just can't believe this will be a feature.

    This needs to be amended. I can see trusted chains, where you would trust a chain from end to end, with each node in the chain being able to cache. But just the proxy?
