
Comment Re:Yeah but... (Score 4, Interesting) 183

I have the perfect comeback for those ignorant fucks: "Are YOU gonna accept responsibility and pay for any and all damages if your site serves malware? No? Then you are knowingly aiding and abetting malware vendors, so kindly fuck off."

If they want to be treated like legitimate businesses, then they have to accept the responsibilities legitimate businesses have. If a business doesn't secure its premises and its patrons are harmed, it is responsible for the cleanup; look at the mounds of money TJ Maxx and Target had to pay for their lack of security. But these websites that want us to treat them as legitimate businesses show the same lack of responsibility as some fly-by-night topsite? Sorry, you can't have your cake and eat it too: either you take on the same responsibilities as a real business, or you don't deserve any more consideration than a cracksite or any other dodgy place on the wild web.

Comment Re:https "evRywhr" is 4 sites, not so much, Users. (Score 1) 44

> hosts file or client-side tracking blocker extension works for HTTPS
> just as well as for cleartext HTTP.
---
You can't use a hosts file to selectively block content. As I've already stated, to cache or to block you need to know the object type and size, and you don't get that with HTTPS.
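To make that concrete, a hosts file can only nullroute an entire hostname -- it never sees the content type or size of what the host would have served (hostnames below are made up):

# /etc/hosts blocks by name only: every request to that name dies,
# whether it was a 2 KB tracking pixel or a 2 MB script you wanted.
0.0.0.0 ads.example.com
0.0.0.0 metrics.example.net

There is no way to say "block only the large images from this host" at that layer; that decision needs a proxy that can see each object.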

> There are anecdotal reports that HTTP/2 over TLS can have less latency
> than cleartext HTTP/1.1. So if you add HTTP/2 to your MITM, you may be
> able to mitigate some of the TLS overhead.
---
Interesting, but it would be highly dependent on the type of traffic. HTTP/2 was supposed to help response time by combining multiple requests, including requests to diverse sources, so it would be unsurprising if it worked under some traffic loads. That's especially true compared to uncached cleartext.

However, I doubt HTTP/2 proponents would be interested in running benchmarks where 33% of the cleartext HTTP requests had zero latency because they were served from a local cache.

Maybe it goes without saying, but combining requests is the opposite of what's needed to block or locally cache 33% of the content.

Comment Re:https "evRywhr" is 4 sites, not so much, Users. (Score 1) 44

> That's true only if your ISP is using an intercepting proxy.
---
Right -- they are a large corporation. You don't think they could be ordered to do so, and to say nothing about it, under the Patriot Act? Do you really believe that root CAs in the US or other monitoring countries couldn't be forced to hand out subordinate CAs to install at ISP monitoring sites?

> Blocking "by site" is still possible with HTTPS...blocking at a finer level than "by
> site" or "intermediate caching" still requires MITM.

I've always blocked by site and media type, and for anything unclear I looked at the HTTP response code. That's no longer possible unless a user sets up MITM proxying that lowers security for all https sites (finance, et al.). While I can install exceptions to whitelist sites whose content shouldn't be cached, those streams are still decrypted.
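For reference, this is roughly the Squid (3.5+) peek-and-splice shape such a setup takes: bump most HTTPS so it can be inspected and cached, but splice (tunnel untouched) the listed sensitive hosts. Hostnames and the CA path are placeholders, and how much of this applies depends on the Squid version in use:

# squid.conf sketch: MITM most HTTPS, tunnel whitelisted hosts as-is
http_port 3128 ssl-bump cert=/etc/squid/mitm-ca.pem generate-host-certificates=on
acl step1 at_step SslBump1
acl no_bump ssl::server_name .mybank.example .myinsurer.example
ssl_bump peek step1          # read the SNI before deciding
ssl_bump splice no_bump      # whitelisted: pass through without decrypting
ssl_bump bump all            # everything else: decrypt, cache, filter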

One has to know content type and size to cache anything effectively. Right now, looking back over the past 3500 requests, I see stats of:
(mem = served from squid memory, dsk = served from the disk cache; each line shows the hit rate by request count, then by bytes)
mem: 8% (313/3514), 16% (11M/70M)
dsk: 23% (842/3514), 10% (7.2M/70M)
tot: 32% (1155/3514), 26% (19M/70M)
& for double that:
mem: 5% (367/7025), 9% (12M/126M)
dsk: 21% (1523/7025), 14% (18M/126M)
tot: 26% (1890/7025), 23% (29M/126M)
---
Without MITM caching, those numbers drop to near zero for HTTPS sites. Those cached objects serve multiple browsers, OSes, machines and users. Losing the ability to cache 25-30% of requests hurts interactive use and raises latency; simply going with HTTPS instead of HTTP creates more server load and more network latency. Sites that serve many static images are affected more heavily. But my local network cache provides 128G of space (55% used) and can store large ISO images that can be re-served months later. With my monthly traffic, a 25% savings can easily run in the 500G range, which is by itself well over many ISP-imposed limits before extra charges kick in.
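For anyone who wants to reproduce that kind of tally, here is a rough sketch that reads Squid's native access.log and reports the hit rate by request count and by bytes. It assumes the default log format (4th field = cache result code, 5th field = reply size); adjust the indexes if logformat was customized:

#!/usr/bin/env python3
# Tally cache hit rate from Squid's native access.log format.
import sys

log_path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/squid/access.log"
reqs = hits = total_bytes = hit_bytes = 0
with open(log_path) as log:
    for line in log:
        fields = line.split()
        if len(fields) < 5 or not fields[4].isdigit():
            continue
        code, size = fields[3], int(fields[4])  # e.g. TCP_MEM_HIT/200, 11432
        reqs += 1
        total_bytes += size
        if "HIT" in code:                       # TCP_HIT, TCP_MEM_HIT, TCP_IMS_HIT, ...
            hits += 1
            hit_bytes += size
if reqs:
    print(f"requests: {hits}/{reqs} ({100 * hits // reqs}%) served from cache")
    print(f"bytes:    {hit_bytes}/{total_bytes} ({100 * hit_bytes // max(total_bytes, 1)}%) served from cache")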

> Intercepting proxies cache HTTPS only if the user has chosen to trust the proxy.
----
Which is why converting most traffic from HTTP to HTTPS hurts caching proxies the most and allows easier tracking by sites like google. From the time I connect to some sites until I leave, google et al. have encrypted connections going. They can easily track which sites I'm on and where I am within each site, without doing any special MITM interception using fed-provided CAs from US-based certificate authorities.

My interest has been in promoting a faster browsing experience (something I've had success with, given feedback from those using the MITM proxies), as well as increasing privacy by blocking requests based on which sites they are being called or referenced from. You can't do that if the site you are connecting to is HTTPS-based.
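With cleartext HTTP, that kind of referrer-aware blocking is a couple of Squid ACLs (domains below are made up); once the request rides inside an opaque TLS tunnel the proxy never sees the Referer header, so there is nothing for the rule to match:

# Block requests to tracker hosts when third-party pages pull them in.
acl trackers dstdomain .adnet.example .beacons.example
acl third_party_referral referer_regex -i ^https?://(www\.)?(news|video)\.example
http_access deny trackers third_party_referral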

I see no benefit to HTTPS for "normal usage" -- only harm for the user and benefit for the sites, especially large data-collection sites like google.

Comment Re:https "everywhere" is 4 websites, not so much U (Score 1) 44

With cleartext HTTP, every router on the path is capable of playing MITM. But what do I care if they "see" which kernel version I download or which open source project I download? Who cares if they see the articles I am reading or writing on slashdot?

There is no improvement, because google sees nearly all the traffic anyway -- it is tied into almost every site, and HTTPS doesn't help a bit. And they in turn can hand the info over to any gov agency that asks for it -- and be forced not to tell you about it.

HTTPS is a wet-security blanket.

Public key pinning? No -- the traffic can still be intercepted at the ISP level; I'm sure larger ISPs can get a root cert. When you connect to an encrypted site, you really connect to your ISP's pass-through traffic decoder, which then opens another encrypted circuit on to wherever you were going.

HTTPS safety is an "illusion" meant to get you to use it, so you can't easily be selective about what you block or cache by site.

Caching rate on HTTP sites: 10-30% or higher. On HTTPS: 0%, plus there's the overhead of encrypting.

Comment Re:https "everywhere" is 4 websites, not so much u (Score 1) 44

That's what I meant by https "everywhere" harming security for those sites that have a legitimate need for it. Implementing a MITM proxy makes all https streams less secure. I don't like that trade-off (not that I haven't already implemented such a proxy for myself).

At the same time, google is pushing for "certificate transparency" (https://www.certificate-transparency.org/what-is-ct), which might not let home-user-issued certs be used for such purposes -- not sure. The more internal proxies implement MITM HTTPS for their internal needs/wants, the harder those who don't want those streams to be easily visible or cacheable will work to disable that "hole"... (IMNSHO)...

Comment https "everywhere" is 4 websites, not so much usrs (Score 1) 44

https on "social" sites (non bank/finance/medical...etc ones traditionally needing encryption), mostly benefits the site -- not so much most user. It usually harms users more than not as it prevents content caching and local-filtering. On a https site, I can cache near zero in my squid proxy (used by more than one account & user). That allows much tighter tracking of individuals as they go from site to site.

On news and discussion sites I can easily get over 25% of requests satisfied locally -- and housemates notice the difference, especially on heavily thumbnailed sites like youtube.
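The thumbnail win comes from letting small static images linger in the cache. Roughly the kind of squid.conf tuning involved -- the numbers here are illustrative, not a recommendation:

# Keep small static images for up to a week even without generous cache headers.
refresh_pattern -i \.(jpg|jpeg|png|gif|webp)$ 1440 80% 10080
maximum_object_size_in_memory 512 KB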

Think twice about https everywhere, because it ends with the standard practice being that gateway owners (companies or individuals) install ways to get around it.

That harms "sensitive-sites" that really should use https, (finance, medical, etc).

Comment Re:FB is a de facto monopoly, just like Microsoft (Score 1) 65

I was wondering how FB's actions are not anti-competitive, and it's because they don't own the market to anywhere near the degree MS did at that time. And since the ultra-conservatives disposed of the old FTC and replaced it with people whose only qualification was supporting the then-current administration, it's less likely the agency would even know what to do if it wanted to act.
