

Comment Re: Whips and manicles (Score 1) 209

If it's not an abacus, it can't count. Most of the rational people have quit fet due to database failures, update disasters, an incredibly primitive unthreaded discussion format and a contingent of highly abusive individuals. Abusiveness and primitiveness have done for tech forums, too, which is why Kuro5hin has been in its death throes for some time.

A community is never stronger than the people who stand behind it and, in sadly far too many cases, the people standing behind the community are crouched down and in hiding.

Comment Re: Funny, I Left GNOME 3 Mainly Because of System (Score 2, Insightful) 403

Software that is designed correctly separates out what it does, how it does it, and how it interacts with the outside world.

Ergo, software that is correctly designed is user-agnostic. If the user thinks in a particular way, whatever that way happens to be, it is the job of the software to accommodate that. If it does not, it is not software for users, it is software that has users. Possession is everything.

Software that is correctly designed is configuration-agnostic. If the configuration file states something is enabled, then that is enabled. It is not the job of the software to say the file really means something else. If the configuration is broken, state how and why. Clearly. If the configuration is old, import and update. But don't tell me, or anyone else, what Joe Bloggs thinks would look better. I don't care. And the more other people's preferences get shoved in my face, the less I will care.
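The principle above can be sketched in a few lines. Everything here is illustrative (the loader, option names and version field are hypothetical): the point is the behaviour — honour what the file says, report breakage precisely, and migrate old formats rather than second-guess the user.

```python
"""Sketch: honour the config, report errors clearly, migrate old formats."""
import json

CURRENT_VERSION = 2

def migrate_v1(cfg):
    # Hypothetical migration: old format used "fullscreen",
    # the new format uses "window_mode". Import and update; don't guess.
    cfg["window_mode"] = "fullscreen" if cfg.pop("fullscreen", False) else "windowed"
    cfg["version"] = CURRENT_VERSION
    return cfg

def load_config(text):
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as e:
        # Broken config: state how and why. Clearly. Do not silently override.
        raise ValueError(
            f"config is not valid JSON at line {e.lineno}, column {e.colno}: {e.msg}"
        )
    if cfg.get("version", 1) < CURRENT_VERSION:
        cfg = migrate_v1(cfg)   # old config: import and update
    return cfg                  # enabled means enabled; no second-guessing

cfg = load_config('{"fullscreen": true}')
```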

Theo clearly has the right idea - the only way to get past the morons is with an attitude of utter contempt. Bugger all else matters, apparently.

Comment I'm switching off Debian. (Score 0) 403

Linux-From-Scratch is easier to use, less user-hostile and less determined to tell me how to think.

ANY software that pretends to know better than me how I want things done is software that deserves to burn. And then sink into the swamp. It is that precise attitude that got me to kick the Windows habit and led me away from the early ix86 BSDs.

I not only think better than a mere machine, I think better than your average distro compiler. I can spec better, I can build better, I can test better. Debian had, up until now, been acceptable: the packages are convenient and it's no great pain to tune. Now, Debian ranks lower than Fedora. I'd recommend the MCC distribution before either, and that was last updated during the Ice Age.

Comment Re: More great insightful summaries from /. - not! (Score 1) 76

I've used the site longer and reserve the right to use Doctor Who references where I'm suspicious of technical details, especially as they relate to timing vulnerabilities. This is allowed, as per The Hacker's Dictionary. Bonus points for finding the Doctor Who references included.

Comment Re: Cursory reading (Score 1) 76

That was pretty much my interpretation as well. Which would be great for ad-hoc encrypted tunnels - the source and destination can have keys that are valid only until the tunnel's authentication expires (typically hourly) and where the encryption is based on the identity the other side is known by. Ad-hoc tunnels need to generate keys quickly and efficiently, but also don't need to be super-secure. In fact, they can't be.

If RIBE isn't useful in ad-hoc, then you'd end up having to ask when it would be useful.
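The hourly-expiring, identity-bound keys described above can be sketched as follows. Real IBE derives private keys from a pairing-based master secret; here HMAC stands in for the private key generator purely to show the rotation behaviour, and every name is illustrative rather than taken from the paper.

```python
"""Toy sketch of identity-bound tunnel keys that expire hourly.

HMAC over (identity, hour) stands in for an IBE private-key generator;
this is NOT real identity-based encryption, just the key-rotation shape.
"""
import hmac
import hashlib
import time

MASTER_SECRET = b"pkg-master-secret"  # held by the key generator in real IBE

def tunnel_key(peer_identity: str, epoch_hour: int) -> bytes:
    # The key is a function of who the peer is and which hour it is:
    # when the hour rolls over, the old key automatically becomes useless.
    msg = f"{peer_identity}|{epoch_hour}".encode()
    return hmac.new(MASTER_SECRET, msg, hashlib.sha256).digest()

now = int(time.time()) // 3600
k1 = tunnel_key("alice@example.org", now)
k2 = tunnel_key("alice@example.org", now + 1)  # next hour: different key
```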

Anything that depends on a third party, including PGP/GPG with keyservers, is vulnerable to some form of compromise; SSL/TLS certificates all have a third-party signer, and Kerberos depends on all kinds of behind-the-scenes work being secure. However, although they're imperfect, they're considered adequate for what they do. Well, except for SSL, perhaps.

RIBE presumably therefore also has a niche where it's good. Rapid key turnover is what's wanted for conversation-based protocols with timeouts. That makes RIBE sound promising for ad-hoc IPSec and SSL, as it makes store-and-crunch attacks less likely to work. But is that the right niche?

Submission + - New revokable identity-based encryption scheme proposed (plosone.org)

jd writes: Identity-based public key encryption works on the idea of using something well-known (like an e-mail address) as the public key and having a private key generator do some wibbly-wobbly timey-wimey stuff to generate a secure private key out of it. A private key I can understand; secure is another matter.

In fact, the paper notes that security has been a big hassle in IBE-type encryption, as has revocation of keys. The authors claim, however, that they have accomplished both. Which implies the public key can't be an arbitrary string like an e-mail address, since presumably you would still want messages going to said address; otherwise, why bother revoking when you could just change address?
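One way revocable IBE schemes square that circle is with per-period key updates: the e-mail address stays the public identity forever, and the key generator simply stops issuing the update key for a revoked identity. The sketch below simulates that shape with a dictionary and HMAC; it is an illustrative stand-in, not the paper's construction, and all names are made up.

```python
"""Toy simulation of revocation-by-key-update in revocable IBE.

The identity (e-mail address) never changes; decryption for a new time
period requires an update key the generator refuses to issue once the
identity is revoked. HMAC stands in for the real pairing-based scheme.
"""
import hmac
import hashlib

class ToyPKG:
    def __init__(self, master_secret: bytes):
        self.master = master_secret
        self.revoked = set()

    def update_key(self, identity: str, period: int):
        # No update key for a revoked identity: old ciphertexts were
        # readable, new ones aren't, and the address itself never changes.
        if identity in self.revoked:
            return None
        msg = f"{identity}|{period}".encode()
        return hmac.new(self.master, msg, hashlib.sha256).digest()

pkg = ToyPKG(b"master")
k_before = pkg.update_key("jd@example.org", 1)
pkg.revoked.add("jd@example.org")      # revoke: the e-mail stays the same
k_after = pkg.update_key("jd@example.org", 2)
```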

Anyway, this is not the only cool new crypto concept in town, but it is certainly one of the most intriguing, as it would be a very simple platform for building mostly-transparent encryption into typical consumer apps. If it works as advertised.

I present it to Slashdot readers, to engender discussion on the method, RIBE in general and whether (in light of what's known) default strong encryption for everything is something users should just get whether they like it or not.

Comment Hmmm. (Score 0) 72

If Kip Thorne can win a year's worth of Playboys on his bet that Cygnus X-1 was a black hole, when current theory from Professor Hawking says black holes don't really exist, then can Professor Thorne please give me a year's subscription to the porno of my choice on the non-existent bet that this wasn't such a star?

Comment Re:Sounds stupid. (Score 1) 296

I've a very good idea that RAM prices are artificially inflated, that the fab plants are poorly managed, that the overheads are unnecessarily high because of laziness and the mentality in the regions producing RAM.

I'm absolutely certain that 15nm-scale RAM on sticks the same size as sticks used today would cost not one penny more but would have a capacity greater than I've outlined.

It could be done tomorrow. The tools all exist, since the scale is already in use. The silicon wafers are good enough: if fabs can manage chips 4x and 9x the size of a current memory chip with next to zero discards, then creating far smaller dies (so you can discard more chips and still get the same absolute yield) is not an issue. It would also reduce idle time for fabs: they are currently run semi-idled to avoid the feast/famine cycle of prior years, but 15nm would let them produce other chips in high demand, soaking up all the extra capacity.

What you end up with is less waste, therefore lower overheads, therefore higher profit. The chip companies like profit. They're not going to pass on discounts; you getting a thousand times the RAM for the same price is discount enough!

Comment Re:10TB of RAM? (Score 1) 296

Not really. RAM is only expensive because of the transistor size used. Fab plants are expensive. Packaging is expensive. Shipping is expensive. Silicon is expensive. If you add all that up, you end up with expensive products.

Because fab plants are running very large transistor sizes, you get low yields and high overheads.

Let's see what happens when you cut the transistor size by three orders of magnitude...

For the same size of packaging, you get three orders of magnitude more RAM. So, per megabyte, packaging drops in cost also by three orders of magnitude.

Now, that means your average block of RAM is now around 8 Tb, which is not a perfect fit but it's good enough. The same amount of silicon is used, so there's no extra cost there. The shipping cost doesn't change. As mentioned, the packaging doesn't change. So all your major costs don't change at all.
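The cost argument above is easy to sanity-check: if packaging, silicon and shipping are fixed per stick, cost per megabyte falls by exactly the capacity multiple. The figures below are made up purely for illustration.

```python
"""Back-of-the-envelope check: fixed per-stick costs mean cost/MB
scales inversely with capacity. All numbers are illustrative."""

fixed_cost_per_stick = 40.0              # packaging + silicon + shipping (made up)
old_capacity_mb = 8 * 1024               # an 8 GB stick
new_capacity_mb = old_capacity_mb * 1000 # three orders of magnitude denser

old_cost_per_mb = fixed_cost_per_stick / old_capacity_mb
new_cost_per_mb = fixed_cost_per_stick / new_capacity_mb
ratio = old_cost_per_mb / new_cost_per_mb  # capacity multiple, ~1000
```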

Yield? The yield for microprocessors is just fine, and they're on about the scale discussed here. In fact, you do better. A processor has to work completely. A memory chip also has to work completely, but it's much smaller: if the three chips around it fail testing, that one is unaffected. So you end up with around a quarter of the rejection rate per unit area of silicon compared to a full microprocessor.
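That yield argument can be sanity-checked with the standard Poisson defect model, where per-die yield is roughly exp(-D*A) for defect density D and die area A: with the same defects per square centimetre, a smaller die wastes a much smaller fraction of the wafer. The numbers below are illustrative, not real fab figures.

```python
"""Poisson defect model sketch: smaller dies waste less wafer area.

yield_per_die ~ exp(-D * A); D and the die areas below are made up.
"""
import math

D = 0.5            # defects per cm^2 (illustrative)
cpu_area = 4.0     # cm^2, a large microprocessor die (illustrative)
mem_area = 1.0     # cm^2, a much smaller memory die (illustrative)

cpu_yield = math.exp(-D * cpu_area)  # fraction of CPU dies that work
mem_yield = math.exp(-D * mem_area)  # fraction of memory dies that work

# Wasted fraction of wafer area given each die type:
cpu_waste = 1 - cpu_yield
mem_waste = 1 - mem_yield
```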

So you've got great yield, same overheads, but... yes... you can use the fab plant to produce ASICs and microprocessors when demand for memory is low, so you've not got idle plant. Ever.

The cost of this memory is therefore exactly the same as the cost of a stick of conventional RAM of 1/1000th the capacity.

Size - Exactly the same as the stick of RAM.

Power budget - of no consequence. When the machine is running, you're drawing from mains power. When the machine is not running, you are refreshing only the dirty bits of memory, nothing else. And 99.9% of the time there won't be any, because sensible OSes like Linux sync before a shutdown. The other 0.1% of the time, when your server has been hit by a power cut, the hard drive is spun down to conserve the UPS and the main box is in the lowest possible energy mode, is when this sort of system matters. Even in low-energy mode, buffers will need flushing, housekeeping will need to be done, transactions will need to be completed. This system would give you all that.

And the time when the machine is fully powered, fully up? Your hard drive spends most of its time still spun down. Not for power, although it'll chew through a fair bit - mechanical devices always do, and the high-speed drives being proposed will chew through far, far more. They'll be spun down because a running hard drive suffers rapid deterioration. Can you believe hard drives only last 5 years??! Keep the damn thing switched off until the last minute, then do one continuous write. That minimizes read-head movement (there's practically none), minimizes bearing wear and tear, eliminates read-head misalignment (much of the time you can write the entire disk in one go, so what the hell do you care if the tracks are not perfectly in line with the ones they're replacing?) and, by minimizing the head's time over the platters, minimizes the risk of a head crash.

I reckon this strategy should double the expected lifetime of drives, so take the cost of one 10 Tb drive and calculate how much extra power the memory would have to consume before its power budget exceeded the value of what you're doing.

Oh, and another thing. Because I'm talking memory sticks, you only need to buy one; subsequent drives of the same or lower capacity would not need memory of their own. You could simply migrate it. RAM seems to hold up OK in old computers, so you can probably say the stick is good for the original drive and its replacement. That halves the cost of the memory per drive.

So, no, I don't see anything unduly optimistic. I think your view of what the companies could be doing is unduly pessimistic and more in line with what the chip companies tell you that you should think than what the chip companies can actually do.
