
Comment Re:Cyberwar may be neccesary as a learning moment (Score 1) 57

Not so sure about that.
We've had the internet up and running for what, going on 20 years now? With no major outages, and redundancy every step of the way. I think it's safe to assume the Internet will be around, barring armageddon or maybe a world war.

That said, any one system can be taken offline by targeting it specifically.

Comment Answers (Score 5, Informative) 77

Where do I perform hashing (smartphone/web client or server)?
You hash twice, with different salts - once on the client side and once again (i.e., hash the hash) on the server side. The doubly-salted, doubly-hashed password is the one you store.
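A minimal sketch of that client-then-server double hash, using stdlib PBKDF2 as a stand-in for bcrypt (function names, salts, and the iteration count are illustrative assumptions, not any particular library's API):

```python
import binascii
import hashlib
import os

def client_hash(password: str, client_salt: bytes) -> bytes:
    # Runs on the smartphone/web client; the server never sees the
    # plaintext password, only this intermediate hash.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), client_salt, 100_000)

def server_hash(intermediate: bytes, server_salt: bytes) -> bytes:
    # Runs on the server: hash the hash, with a different salt.
    return hashlib.pbkdf2_hmac("sha256", intermediate, server_salt, 100_000)

client_salt = os.urandom(16)   # stored with the account, sent to the client
server_salt = os.urandom(16)   # stored only on the server
stored = server_hash(client_hash("hunter2", client_salt), server_salt)
print(binascii.hexlify(stored).decode())
```

The point of the second, server-side round is that even if an attacker intercepts the client-side hash, it is not what sits in your database.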

What hash algorithm should I use?
You said it yourself - bcrypt. bcrypt allows you to set a cost, which increases password brute-forcing difficulty but also increases computational cost on every verification. Set the cost to be the maximum you can handle - if you have a stronger computer and fewer users, you can set a higher cost.
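To see what the cost parameter buys you: bcrypt makes each verification take 2^cost units of work, so bumping the cost by one doubles the brute-forcing effort and your per-login CPU bill. A rough stdlib illustration of that trade-off (PBKDF2 iterations standing in for bcrypt's internal rounds; the real bcrypt API differs):

```python
import hashlib
import os
import time

def hash_with_cost(password: bytes, salt: bytes, cost: int) -> bytes:
    iterations = 2 ** cost          # raising cost by 1 doubles the work
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

salt = os.urandom(16)
for cost in (10, 14, 17):
    t0 = time.perf_counter()
    hash_with_cost(b"hunter2", salt, cost)
    print(f"cost={cost}: {time.perf_counter() - t0:.4f}s per verification")
```

Pick the largest cost whose per-verification time your server and your users can tolerate, and revisit it as hardware improves.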

How do I store the hashes?
Chrome uses encrypted SQLite for browser-saved passwords. Which encryption scheme depends on the platform - Windows has CryptProtectData, KDE and Gnome have keyrings. The basic idea in all of these is to use a symmetric encryption algorithm (e.g. AES) with the key derived from a set of hashes over machine-specific data, like hardware serial numbers. If you want to go hardcore, use a hardware security module (HSM).
Note that it is important to encrypt the file on disk, but it is also important to make sure that decrypted hashes stay in server memory for as short a time as possible.
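The key-derivation step above can be sketched with the stdlib alone. Here `uuid.getnode()` (the MAC address) and a made-up disk serial stand in for "machine-specific data"; a real deployment would use the platform keyring or an HSM, and feed the resulting key into AES via a proper crypto library (not shown):

```python
import hashlib
import uuid

def derive_store_key(machine_ids: list) -> bytes:
    # Hash each identifier separately, then combine the digests,
    # so no single raw identifier is recoverable from the key.
    h = hashlib.sha256()
    for mid in machine_ids:
        h.update(hashlib.sha256(mid).digest())
    return h.digest()   # 32 bytes: usable as an AES-256 key

machine_ids = [uuid.getnode().to_bytes(6, "big"),   # MAC address
               b"disk-serial-1234"]                  # illustrative placeholder
key = derive_store_key(machine_ids)
print(key.hex())
```

The derivation is deterministic on a given machine, so the encrypted store can be reopened after a reboot without the key ever being written to disk.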

How can clients recover forgotten passwords?
They can't recover forgotten passwords - you're only storing hashes, remember? What they can do is reset their password. Two-factor verification is best (a verified email account and a phone number, if you can send SMSes or automated calls), but email plus a security question seems to be the standard minimum.
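A reset flow can stay hash-only too: generate a random token, store only its hash plus an expiry, send the token over the verified channel, and compare in constant time when it comes back. A minimal sketch (names, TTL, and the in-memory dict are illustrative; you'd use a DB table):

```python
import hashlib
import secrets
import time

RESET_TTL = 3600   # seconds a token stays valid (assumed policy)
pending = {}       # user -> (token_hash, expires_at)

def issue_reset_token(user: str) -> str:
    token = secrets.token_urlsafe(32)   # sent to the verified email/phone
    pending[user] = (hashlib.sha256(token.encode()).digest(),
                     time.time() + RESET_TTL)
    return token                        # never stored in plaintext

def redeem_reset_token(user: str, token: str) -> bool:
    entry = pending.get(user)
    if entry is None or time.time() > entry[1]:
        return False
    ok = secrets.compare_digest(entry[0],
                                hashlib.sha256(token.encode()).digest())
    if ok:
        del pending[user]               # single use
    return ok

t = issue_reset_token("alice")
print(redeem_reset_token("alice", t))   # first use succeeds
print(redeem_reset_token("alice", t))   # second use fails: already consumed
```

Storing only the token's hash means a leaked reset table is as useless to an attacker as your leaked password table.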

Comment But the routers themselves suck (Score 1) 94

Three of us share an apartment, with three laptops, a Raspberry Pi, three phones, and the occasional guest. We've gone through several D-Link and TP-Link routers. The WiFi quality sucks - reception is crappy and drops 5 m (about 15 ft) from the router, behind one wall.

What router should we buy? Does OpenWrt/DD-WRT affect performance?

Comment Here's the thing though... (Score 1) 236

It's not really that hard for a bad guy to buy a cop costume. Humans can't tell the difference between the police and some random jackass either. And if a guy is standing in the middle of the road signaling you to stop, you're going to stop anyway, just to not run him over.

I think self-driving cars should be treated like taxis. Just as you can't expect your taxi driver to disobey a cop, you can't expect your SDC to either.

Comment Re:The simple Economics of it all: (Score 1) 185

Where's your math? This whole 5-point-scored rant is basically one long ad hominem, with not a single link to back up your claims (who disagrees with Gavin?...).

If you want more transactions per minute, you're going to need a higher limit; a higher limit puts more stress on the nodes and the network. That's where the argument lies.
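The back-of-the-envelope version of that trade-off: the 1 MB limit and ~10-minute block interval are Bitcoin's actual figures, while the 250-byte average transaction size is an assumed round number (real sizes vary by transaction type):

```python
BLOCK_LIMIT_BYTES = 1_000_000   # the 1 MB block size limit
AVG_TX_BYTES = 250              # assumption; varies with transaction type
BLOCK_INTERVAL_S = 600          # one block every ~10 minutes

tx_per_block = BLOCK_LIMIT_BYTES // AVG_TX_BYTES
tx_per_second = tx_per_block / BLOCK_INTERVAL_S
print(f"{tx_per_block} tx/block -> {tx_per_second:.1f} tx/s ceiling")
# Doubling the limit doubles the ceiling - and the bandwidth/storage
# load on every full node that has to relay and verify those blocks.
```

Under those assumptions the network tops out at a handful of transactions per second, which is exactly why the limit becomes the bottleneck as adoption grows.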

Adoption rises and technology progresses, so by continuity there is some point in time at which the higher stress is less of an issue and we will need the room for more transactions. Gavin et al. say that point is not far off, and that we should act now to avoid problems later. Having heard the arguments so far, I agree with them.

Comment Re:Why is the limit a problem? IS it a problem? (Score 1) 185

It's obvious that if you want to be able to have more transactions/minute, the block size limit will have to go up. Everyone knew it had to happen sometime.

Check out this thread:

Back then, in 2013, large blocks (granted, occurring only once every few weeks - not much, considering there's one block every 10 minutes or so) reached 900k and even 990k. Two years later, adoption keeps going up, and two core maintainers think it's about time we raised that limit.

Why not? Why wait for the problems - in the form of higher processing fees and longer waits for transaction approval? Now's as good as it's ever gonna be.

Comment We're actually better off (Score 1) 95

We used to run applications locally. They had a lot more freedom - any and all apps could know exactly who you were and what your computer's UUID was, not just how your battery is doing. Today most of what you use - the obvious examples being your mail and, to a lesser extent, your office suite - is at least sandboxed inside your browser.

This is not to say there hasn't been a rise in tracking, but the story just got me thinking that maybe it's a good thing it's being done in a browser.
(And you should be whitelisting the use of cookies and javascript - and blocking unnecessary trackers).
