Comment: Re:So is the Internet Archive just a piracy site n (Score 3, Insightful) 198

by radish (#48752019) Attached to: Internet Archive Adds Close To 2,400 DOS Games

And you're clearly going to be shocked if you ever learn how a library actually works.

Hint: the books (and CDs, and DVDs, and games) on the shelves are legally purchased copies, and are lent to a single patron at a time. They are not printouts of torrented epubs.

I love the Internet Archive but I seriously have no idea what they think they're doing here.

Comment: Re:Missing from my iPhone (Score 1) 421

by radish (#48744685) Attached to: What Isn't There an App For?

You should look at DLNA more closely (note it's a certification built on UPnP, so you'll see things listed under that category too). It's very common, there are plenty of FOSS clients and servers (here's a small list), and it's been around for years. It does not require any new hardware - most devices & software clients capable of streaming media already support it (check the page I linked - something like 18000 models). It seems like you're raging against something which does exactly what you want - it allows you to easily stream your local content to local or remote devices over an open, cross-platform protocol.

The reason devices are less likely to support SMB is that DLNA exists, is easier to implement, and provides a better user experience. There's literally no reason (that I can think of) to use SMB.
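For the curious, the discovery layer DLNA builds on (SSDP, part of UPnP) is simple enough to poke at by hand. Here's a rough Python sketch - it broadcasts the standard M-SEARCH multicast query for MediaServer devices and collects whatever answers within the timeout. The multicast address and port are fixed by the UPnP spec; on a network with no DLNA servers it simply finds nothing.

```python
import socket

# Standard SSDP M-SEARCH request for DLNA-style media servers.
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:MediaServer:1",
    "", "",
]).encode()

def discover(timeout=2.0):
    """Broadcast the query and return (address, raw response) pairs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH, ("239.255.255.250", 1900))
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            found.append((addr[0], data.decode(errors="replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return found
```

Every certified server answers this same query, which is exactly why clients don't need per-device hacks the way SMB browsing does.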

Comment: Re:Inexplicable gaps in Crypto products. (Score 1) 421

by radish (#48734249) Attached to: What Isn't There an App For?

Well I've no idea what this has to do with smartphone apps, but I'll bite.

1) Most public key products do use symmetric encryption for actual data transfer. The public key bit handles mutual authentication and the generation and exchange of the symmetric key. Your approach does this ahead of time, by throwing a crap ton of them in a file and copying it to the remote host (via what, sftp?).

2) The advantage of public key crypto is that there is (or should be) precisely one copy of my secret (the private key), so I have some hope of being able to control it. In your approach there is one copy per host. In a non-trivial deployment, keeping that file (a) private and (b) current is going to be extremely difficult. All I need is one copy of that file (or a portion of it) and I can snoop any channel and modify any message in transit. The use of UDP is puzzling, as I'm pretty sure that makes message tampering even easier (although I'm not enough of an expert to say that for certain).

3) I don't see the point of the passwords/hashes on top of the keys. If I have the key I can communicate with you, if I don't I can't. Adding another secret which is in the same file as the key doesn't seem to add anything (for one thing, if I have the key and can listen in on messages I can easily extract the passwords as they fly by).

4) All the stuff about file "copy numbers" is meaningless as you are trusting the peer to tell you honestly which copy it has. Rule number 1 in network security is you never, ever, trust the other side. Listener copy numbers are "256 and up" so I can just make up a random number in the 100000 range and I'm very unlikely to collide with yours, so the check passes trivially.

5) There's no host level identity. How do I know I'm talking to the host I think I'm talking to? All someone with a copy of the key file has to do is change the copy number and they can masquerade as any host on the network (with an appropriate DNS/IP spoof or whatever). SSH prevents that because knowing one host's signature doesn't help you guess another.

6) There's no user level identity. Who is logging in to this box? Are they actually allowed to do so?

7) Changing the keys all the time is pointless. Assuming I'm using a good cipher, extracting the key from the encoded stream should be essentially impossible, so changing it likely won't improve security. Moreover, if I have one of your keys I probably have all of them, so changing it won't stop me. Further, having to allow for clock skew introduces complexity which is potentially exploitable. If you were generating random session keys dynamically and exchanging them out of band somehow, then periodic rolling wouldn't be a bad idea (because I'd have had to crack the crypto to figure out the first key, and now I have to start all over again).
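To make point 4 concrete, here's a toy Python sketch (the function and range rules are hypothetical, reconstructed from the scheme as described). The only check a host can actually perform on a self-reported copy number is that it falls in the listener range and isn't already taken - nothing proves the peer really holds that copy, so a made-up number sails through.

```python
import random

def copy_number_ok(claimed, in_use):
    # All the host can verify: the claimed number is in the listener
    # range ("256 and up") and isn't already taken by a live peer.
    # Nothing here proves the peer actually owns that copy of the file.
    return claimed >= 256 and claimed not in in_use

# An attacker who has the key file just invents a big random number;
# colliding with a legitimate listener is wildly unlikely.
forged = random.randint(100_000, 1_000_000)
print(copy_number_ok(forged, in_use={256, 257, 301}))  # True
```

This is what "never trust the other side" means in practice: any value the peer is free to choose is a value the peer is free to forge.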

There's more I'm sure, but it's late :)

Comment: Re:Missing from my iPhone (Score 1) 421

by radish (#48734081) Attached to: What Isn't There an App For?

SMB streaming is a pain because you have to deal with whatever formats you might encounter, plus you have to maintain a local index of content etc. if you want to provide any decent kind of UI. Every SMB-based streaming device I've used (including very expensive ones) has sucked. DLNA is a much better bet, as the server can abstract away all the complexity, and there are a bunch of DLNA client apps for iOS.

Comment: Re:This is why I like Python so I can use OOP or n (Score 1) 303

by radish (#48728079) Attached to: Anthropomorphism and Object Oriented Programming

There's absolutely nothing stopping you writing procedural code in Java - just put everything in one class and mark all your methods as static. Of course, if you're going to start interacting with the class library you'll have to bend to its way of thinking, but that's not a _language_ thing. I don't recommend doing that, but it can be done.

This is why an experienced developer has multiple tools at her disposal - Java is great (IMHO) for a lot of things, but I'll pull out Ruby or Perl for some stuff, C# for others (e.g. when I want a native windows UI), Scala for yet more. There is no one size fits all, and just because one tool doesn't do everything doesn't make it useless.

Comment: Re:Buy two... (Score 1) 190

My 5-ish TB of data over at CrashPlan begs to differ (and yes, I have a local copy as well).

Mirrored drives are not a good idea for data protection - for one thing, an accidental delete (or overwrite, or ransomware, or whatever) will take your data out completely and instantly. Much better to do incremental backups at the file level, so you can restore deleted or damaged files from any point in their history. Even if you don't want to pay for the cloud service, the CrashPlan software will do this very nicely to any target server.
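To illustrate the file-level versioning idea (a hypothetical sketch, nothing like the real CrashPlan engine, which does block-level dedup and retention policies): each run copies only changed files and renames the previous version aside instead of destroying it, so a bad overwrite upstream stays recoverable.

```python
import hashlib
import shutil
import time
from pathlib import Path

def backup_incremental(src: Path, dest: Path) -> int:
    """Copy changed files from src to dest, archiving prior versions.

    Returns the number of files copied this run.
    """
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dest / f.relative_to(src)
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        if target.exists():
            if hashlib.sha256(target.read_bytes()).hexdigest() == digest:
                continue  # unchanged since last run; nothing to do
            # Keep the old version instead of overwriting it, so an
            # accidental delete or ransomware overwrite upstream
            # doesn't silently propagate into the backup.
            stamp = time.strftime("%Y%m%d%H%M%S")
            target.rename(target.with_name(target.name + "." + stamp))
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
        copied += 1
    return copied
```

A mirror would have replaced the old copy in place; here every historical version survives until you decide to prune it.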

Comment: Re: The difference is that THERE is evidence (Score 2) 82

by radish (#48621705) Attached to: Manufacturer's Backdoor Found On Popular Chinese Android Smartphone

What you're saying basically boils down to "in the end you have to trust the people who wrote the OS or built the device". Yes, yes you do. This article is an example of how one such group abused that trust. Of course Apple and Google could do the same, but absent any evidence that they have, saying they could is kind of redundant.

Comment: Re:Creators wishing to control their creations... (Score 2) 268

And I honestly don't think Microsoft are trying to control what you do with their software. At least, I've never seen anything like that. All the licensing stuff is about proving you actually did buy it, and thus proving that the first sale doctrine even applies. It's a nuisance for sure, but I'm not sure what the alternative is. That said, as a 20+ year user of their products I've had to call for a license activation precisely once and it took maybe 60 seconds. I can live with that.

Comment: The biggest problem is mobile (Score 1) 39

by radish (#48175855) Attached to: How Whisper Tracks Users Who Don't Share Their Location

I've worked with MaxMind stuff on mobile IP location - as the guy says, it's pretty useless. If the user is on wifi it's not too bad; at least the IPv4 stuff could pretty reliably get the state and often the city. I never had any luck with IPv6, although they claim to support it better now.

The big kicker is if the user is on cellular - at least in the US, most cell networks are natively IPv6, and they tunnel connections through giant NAT devices. This leads to two interesting effects: first, the IPv4 address you see on the server is located at some random data center, usually on the other side of the country from the user. Second, the IP (and therefore the data center) keeps changing - sometimes multiple times within a few minutes. Doing any kind of tracking leads to a device which appears to keep hopping back and forth between California and Kansas.
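One cheap mitigation when you're stuck with IP geolocation on mobile: sanity-check successive fixes against physically possible travel speed and discard the impossible jumps. A hypothetical Python sketch (the 1000 km/h airliner ceiling is my assumption, not anything MaxMind provides):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in km.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def plausible_move(prev, curr, seconds, max_kmh=1000):
    # Reject geolocation "jumps" faster than an airliner. A CGNAT exit
    # point flipping coast-to-coast in minutes fails this immediately.
    km = haversine_km(prev[0], prev[1], curr[0], curr[1])
    return km <= max_kmh * (seconds / 3600)

# San Jose -> Kansas City in five minutes is NAT churn, not travel.
print(plausible_move((37.34, -121.89), (39.10, -94.58), 300))  # False
```

It doesn't recover the user's real location, but it does stop the "teleporting device" artifacts from polluting your tracking data.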

This Microsoft Research whitepaper talks more about these issues.

(and before anyone jumps on me for the privacy implications of trying to do this - in my specific case it was tracking devices in an enterprise environment for security purposes and everyone involved had given informed consent)

Comment: Re:Local Backups (Score 1) 150

by radish (#48152389) Attached to: If Your Cloud Vendor Goes Out of Business, Are You Ready?

Why do they have to be exclusive options? I back up locally to a server under my desk, and remotely to the cloud. In the (more likely) event of an HDD failure I can restore as fast as my server can spit the data back out and be up and running in a few hours. In the (less likely) event of a catastrophe like a fire it might take a while to restore everything, but at least it's not gone forever (and if I'm willing to pay, they'll FedEx me all my data on a drive). If the cloud provider goes bust I still have my local backup and can switch to a new offsite provider.

FWIW I pay around $12 a month for unlimited off-site storage (and currently use maybe 4TB) - this is with CrashPlan. If you have anything remotely valuable it seems like an obvious thing to do for a little more peace of mind.

Comment: Re:Why do people still care about C++ for kernel d (Score 1) 365

by radish (#48068637) Attached to: Object Oriented Linux Kernel With C++ Driver Support

"Our memory usage scales with load. Our load scales with usage. Predictions about growth in popularity of our product are all very well, but no excuse for not monitoring for impending doom."

Of course. But testing will tell you something like "a single instance with a 32GB heap will support 9000 tx/sec with acceptable 99.9% latency". So you can monitor traffic levels and scale out as appropriate, well before whatever is monitoring GC starts seeing problems. Where I work we deal with request rates in the 100k/s range, so if things go wrong they do so very fast - the trick is to know the limits and stay well away from them!
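The arithmetic behind "know the limits and stay well away from them" is simple enough to sketch (the headroom factor is my assumption; the per-instance figures are the ones from this thread):

```python
import math

def instances_needed(expected_rate, per_instance_capacity, headroom=0.5):
    # Plan each instance to carry only a fraction of its measured
    # ceiling: with 50% headroom, an instance load-tested at 9000 tx/sec
    # is only budgeted for 4500, so traffic spikes land well short of
    # the point where GC pressure starts showing up.
    usable = per_instance_capacity * (1 - headroom)
    return math.ceil(expected_rate / usable)

# One 32GB-heap instance handled 9000 tx/sec in testing; plan for a
# 100k/sec peak with 50% headroom.
print(instances_needed(100_000, 9_000))  # 23
```

The point is that the scale-out trigger is a traffic threshold you computed in advance, not a GC alarm firing after the fact.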

"(especially since we have some legacy code that doesn't scale horizontally, so we have to keep throwing more memory at the problem for those services until we can fix that)."

Oh fun :) Be wary of getting too big. I'm a JVM fan, but if you start going above 100GB you need to be careful - GC pauses can become extremely long, and tuning the new/eden generations becomes very important. Over 200GB and you're on the bleeding edge. If you have the budget, look at Azul - their stuff is amazing.
