
Comment Simple and obvious solution (Score 2) 241

Just like .edu and .gov, which require valid credentials (to a degree) to own a domain, they could simply institute a new TLD whose registry requires ID validation and prohibits all privacy services for WHOIS information. Enforce a strict contact-availability policy, and you have as good a system as you can pragmatically set up. As an opt-in TLD, no one would be forced to sacrifice the privacy of their current TLDs, and the sites that want to be legitimate sources of information can host their content on their verified domains.

I don't for a minute think this addresses the problem of the masses believing everything they read on traditional .com sites -- and especially on social networks. But going this route could improve the accessibility of credible information for those who can be bothered to source-check.

Comment Re: I'm Confused (Score 5, Informative) 111

TFA mentions that:

8 Issue R: Purchase of StartCom (Nov 2015)

So it happened less than a year ago. What you researched 18 months ago was probably legit; the acquisition happened after your issuance. That said, having been a long-time user of StartCom/StartSSL, I find it depressing that it's gone this route. But I've moved on to LetsEncrypt recently anyway, since the StartSSL website was a royal PITA to use and LetsEncrypt works much more fluidly.
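If you're in the same boat and want to see what you're still serving, here's a minimal Python sketch (the hostname is a placeholder, not anything from TFA) that prints the issuer of a site's certificate, so you can spot anything that still chains to StartCom:

# Minimal sketch: print the issuer of a site's TLS certificate.
# "example.com" is a placeholder; substitute the host you want to audit.
import socket
import ssl

def cert_issuer(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of RDNs, e.g. ((('organizationName', "Let's Encrypt"),), ...)
    return dict(rdn[0] for rdn in cert['issuer'])

if __name__ == '__main__':
    print(cert_issuer('example.com'))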

Sad, but time to move on, I guess.

Comment Not really ready for prime time (Score 5, Informative) 123

I've been holding my breath for a long time for this, and it's pretty disappointing to have to say it: this is really not ready for real use -- at least not for most non-trivial use. For example, I can't easily get a MySQL connector to work, since it's built for .NET 4.x and not Core. The majority of packages I use in my projects don't support Core. Obviously this takes time, and until Core went live, supporting it was a lower priority for package maintainers. That's understandable. But it's hard to do anything useful with it, and as a developer it's highly frustrating to not be able to do something as fundamental as pulling in third-party packages. The new CLI toolset is a bit weird, and it's a few steps backwards from what they were proposing -- like being able to save and reload (quickly) -- but I suppose that for now I should just celebrate that they're headed in the right direction... Maybe.
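To give a hypothetical example of the packaging problem (the package name and versions below are illustrative, not a specific bug report): reference a library that only ships .NET Framework 4.x assemblies from a netcoreapp1.0 project.json, and dotnet restore fails with a framework-compatibility error because the package has no assets for Core.

{
  "version": "1.0.0-*",
  "buildOptions": { "emitEntryPoint": true },
  "dependencies": {
    "MySql.Data": "6.9.9"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": { "type": "platform", "version": "1.0.0" }
      }
    }
  }
}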

Comment Re:Amazon Silk + SSL = MITM? (Score 1) 249

The RFC you linked to points out that in a proxy situation this establishes a secure connection between you and the proxy (what happens between the proxy and the target site is undefined). If you want end-to-end TLS, it states you must use CONNECT to create a tunnel.
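To illustrate the distinction (a generic sketch with placeholder proxy and host names, not anything specific to Silk): with CONNECT the proxy only relays bytes, and the TLS handshake happens directly with the origin server, so the proxy never sees plaintext.

# Sketch of end-to-end TLS through a forward proxy via CONNECT.
# Proxy address and target host are placeholders.
import socket
import ssl

PROXY = ("proxy.example.net", 8080)   # hypothetical forward proxy
TARGET = "www.example.com"

sock = socket.create_connection(PROXY)
# Ask the proxy to open a raw tunnel to the target.
sock.sendall(f"CONNECT {TARGET}:443 HTTP/1.1\r\nHost: {TARGET}:443\r\n\r\n".encode())
reply = sock.recv(4096)
assert b"200" in reply.split(b"\r\n", 1)[0], "proxy refused the tunnel"

# The TLS handshake happens inside the tunnel, directly with the origin server,
# so the proxy only sees ciphertext from this point on.
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(sock, server_hostname=TARGET)
tls.sendall(f"GET / HTTP/1.1\r\nHost: {TARGET}\r\nConnection: close\r\n\r\n".encode())
print(tls.recv(4096).decode(errors="replace"))
tls.close()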

I can't imagine Amazon would funnel TLS-encrypted connections through AWS using this method, since the whole point of Silk is to analyze/cache/preload the content (end-to-end crypto would break that ability). If they couldn't read your HTTPS data, it would mean lower latency for you and lower costs for Amazon to have the client connect directly. Their Help site makes it sound like the proxy/cached mode is the default setting, so IMHO it is still effectively a man-in-the-middle.

Thankfully, it looks like you can disable it (or use a different browser), so I may just be paranoid for no reason.

Comment Amazon Silk + SSL = MITM? (Score 5, Insightful) 249

Cross-posting from my old comment. As per their help:

What about handling secure (https) connections?
We will establish a secure connection from the cloud to the site owner on your behalf for page requests of sites using SSL (e.g. https://siteaddress.com/ ).

So essentially, they become the man-in-the-middle so they can better cache your HTTPS content? And their browser is programmed to present this as acceptable/secure... What kind of privacy implications does this introduce? Even if their privacy policy says they won't use the data maliciously, cloud computing isn't a bullet-proof system (e.g., leaks, hacking incidents, etc.). Call me paranoid, but if I read this right, this sounds like a frightening idea.

Comment Amazon Silk + SSL = MITM? (Score 2) 521

As per their help:

What about handling secure (https) connections?
We will establish a secure connection from the cloud to the site owner on your behalf for page requests of sites using SSL (e.g. https://siteaddress.com/ ).

So essentially, they become the man-in-the-middle so they can better cache your HTTPS content? And their browser is programmed to present this as acceptable/secure... What kind of privacy implications does this introduce? Even if their privacy policy says they won't use the data maliciously, cloud computing isn't a bullet-proof system (e.g., leaks, hacking incidents, etc.). Call me paranoid, but if I read this right, this sounds like a frightening idea.

Comment Alternative Solution: Implement it Right? (Score 5, Insightful) 354

There's all this talk of URL shortening services -- whether third-party or implemented in-house.

The question here is this: Why are the URLs so long to begin with?

Why does it have to be:
http://shiflett.org/blog/2009/apr/save-the-internet-with-rev-canonical

A full title in the URL is, IMHO, a very inefficient idea. The excuses I've heard are:

Search engine optimization (better ranking when keywords are in the URL)
Okay, I can't argue that some search engines do stuff like that. But shouldn't the TITLE or META tags have more bearing on this than how ridiculously long the URL is?

"The URL has meaning, so you know what you're clicking", Context, etc.
I suppose that when I see a URL like
http://shiflett.org/blog/2009/apr/save-the-internet-with-rev-canonical
as opposed to something like
http://example.org/blog/526
I would have a slightly better idea of the article's content before clicking on it. But then again, I can't really say I've ever decided against clicking a link just because of its URL. I decide whether to visit a link by its link text/description instead.

So <a href="http://example.org/blog/526">blog on link shortening</a> would still have the same effect on me as a long URL, IMO. If it were bookmarked, the same rules would apply.

Hell, if I were handed an obfuscated shortened URL without context, I'd know even less about what I was getting myself into.

I think the proper solution is to just stop making ridiculously long URLs to begin with, so we don't have to rely on obfuscation/hashing/shortening to accommodate services with character limits. And we'd save bandwidth too, apparently. Win-win?
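For illustration (my own sketch, with a hypothetical path scheme, not anything from TFA): if the canonical URL is just a base-62 encoding of the post's numeric ID, every post gets a short, permanent URL without needing any third-party shortener.

# Sketch: derive a short URL path from a numeric post ID using base-62.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(n: int) -> str:
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def decode(s: str) -> int:
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n

# Post 526 could live at http://example.org/b/8u (hypothetical scheme)
print(encode(526), decode(encode(526)))   # -> 8u 526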
