
Comment: Hmmm. (Score 1) 41

by jd (#47921793) Attached to: Astronomers Find Star-Within-a-Star, 40 Years After First Theorized

If Kip Thorne can win a year's worth of Playboys on his bet that Cygnus X-1 was a black hole, when current theory from Professor Hawking says black holes don't really exist, then can Professor Thorne please give me a year's subscription to the porno of my choice on the strength of the non-existent bet that this wasn't such a star?

Comment: Re:Sounds stupid. (Score 1) 296

by jd (#47877989) Attached to: WD Announces 8TB, 10TB Helium Hard Drives

I have a strong suspicion that RAM prices are artificially inflated, that the fab plants are poorly managed, and that overheads are unnecessarily high because of laziness and the prevailing mentality in the regions producing RAM.

I'm absolutely certain that 15nm-scale RAM on sticks the same size as sticks used today would cost not one penny more but would have a capacity greater than I've outlined.

It could be done tomorrow. The tools all exist, since the scale is already in use. The silicon wafers are good enough: if fabs can manage chips 4x and 9x the size of a current memory chip with next to zero discards, then producing far smaller dies (where you can discard more chips and still get the same absolute yield) is not an issue. It would also reduce idle time for fabs. They are currently run semi-idle to avoid the feast/famine cycle of prior years, but at 15nm they could produce other chips in high demand, soaking up all the extra capacity.

What you end up with is less waste, therefore lower overheads, therefore higher profit. Chip companies like profit. They're not going to pass on discounts, but getting a thousand times the RAM for the same price is discount enough!

Comment: Re:10TB of RAM? (Score 1) 296

by jd (#47877957) Attached to: WD Announces 8TB, 10TB Helium Hard Drives

Not really. RAM is only expensive because of the transistor size used. Fab plants are expensive. Packaging is expensive. Shipping is expensive. Silicon is expensive. If you add all that up, you end up with expensive products.

Because fab plants are running very large transistor sizes, you get low yields and high overheads.

Let's see what happens when you cut the transistor size by three orders of magnitude...

For the same size of packaging, you get three orders of magnitude more RAM. So, per megabyte, packaging drops in cost also by three orders of magnitude.

Now, that means your average block of RAM is around 8 TB, which is not a perfect fit but it's good enough. The same amount of silicon is used, so there's no extra cost there. The shipping cost doesn't change, and, as mentioned, neither does the packaging. So none of your major costs change at all.
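As a rough sanity check on that scaling claim (the feature sizes below are illustrative assumptions, not actual process figures): cell density scales with the inverse square of the feature size, so a roughly 32x linear shrink gives about three orders of magnitude more capacity in the same package.

```python
# Back-of-the-envelope density scaling: halving the feature size
# quadruples the number of cells per unit area.
def density_gain(old_nm: float, new_nm: float) -> float:
    """Cells per unit area scale with the inverse square of feature size."""
    return (old_nm / new_nm) ** 2

# Illustrative numbers only: shrinking ~480 nm-class cells down to 15 nm.
gain = density_gain(480, 15)
print(f"{gain:.0f}x the capacity in the same package")  # ~1024x
```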

Yield? The yield for microprocessors is just fine, and they're at about the scale discussed here. In fact, you do better. A processor has to work completely; a memory chip also has to work completely, but it's much smaller. If the three dies around it fail testing, that doesn't affect this one. So you end up with around a quarter of the rejection rate per unit area of silicon compared to a full microprocessor.
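The smaller-die advantage can be sketched with the standard Poisson die-yield model, Y = exp(-D * A). The defect density and die areas below are made-up illustrative values, not real fab data:

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-D * A).
    Smaller dies catch fewer defects each, so per-die yield rises."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative: a 1 cm^2 processor die vs. a 0.25 cm^2 memory die,
# both at an assumed defect density of 0.5 defects/cm^2.
cpu_yield = die_yield(0.5, 1.0)    # ~0.61
ram_yield = die_yield(0.5, 0.25)   # ~0.88
print(f"CPU die yield {cpu_yield:.2f}, RAM die yield {ram_yield:.2f}")
```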

So you've got great yield, same overheads, but... yes... you can use the fab plant to produce ASICs and microprocessors when demand for memory is low, so you've not got idle plant. Ever.

The cost of this memory is therefore exactly the same as the cost of a stick of conventional RAM of 1/1000th the capacity.

Size - Exactly the same as the stick of RAM.

Power budget - of no consequence. When the machine is running, you're drawing from mains power. When it is not, you're refreshing only the dirty portions of memory, nothing else. And 99.9% of the time there won't be any, because sensible OSes like Linux sync before a shutdown. The other 0.1% of the time, when your server has been hit by a power cut, the hard drive has spun down to save the UPS, and the main box is in its lowest possible energy mode, is exactly when this sort of system matters. Even in low-energy mode, buffers need flushing, housekeeping needs doing, and transactions need completing. This system would give you all that.

And when the machine is fully powered, fully up? Your hard drive still spends most of its time spun down. Not for power, although it'll chew through a fair bit (mechanical devices always do, and the high-speed drives being proposed will chew through far, far more). It'll be spun down because a running hard drive suffers rapid deterioration. Can you believe hard drives only last 5 years??! Keep the damn thing switched off until the last minute, then do one continuous write. That minimizes read-head movement (there's practically none), minimizes bearing wear and tear, eliminates read-head misalignment (much of the time you can write the entire disk in one go, so what the hell do you care if the new tracks aren't perfectly in line with the ones they're replacing?), and, by minimizing the head's time over the platter, minimizes the risk of a head crash.

I reckon this strategy should double the expected lifetime of drives. So take the cost of one 10 TB drive and calculate how much extra power the memory would have to consume before its power budget exceeded the value of what you're saving.
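That break-even is easy to sketch. The drive cost, lifetime, and electricity price below are purely illustrative assumptions:

```python
def breakeven_watts(drive_cost_usd: float,
                    lifetime_years: float,
                    electricity_usd_per_kwh: float) -> float:
    """Continuous draw (in watts) at which the extra electricity for
    the RAM costs as much as the drive the strategy is meant to save."""
    hours = lifetime_years * 24 * 365
    kwh_budget = drive_cost_usd / electricity_usd_per_kwh
    return kwh_budget / hours * 1000  # kW -> W

# Illustrative: a $400 10 TB drive, 5 extra years of life, $0.15/kWh.
print(f"{breakeven_watts(400, 5, 0.15):.0f} W")  # ~61 W of continuous draw
```

Any realistic refresh current for dirty pages sits far below that figure, which is the point the comment is making.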

Oh, and another thing. Because I'm talking about memory sticks, you only need to buy one; subsequent drives of the same or lower capacity wouldn't need memory of their own. You could simply migrate the stick. RAM seems to hold up OK in old computers, so you can probably say the stick is good for the original drive and its replacement. That halves the cost of the memory per drive.

So, no, I don't see anything unduly optimistic. I think your view of what the companies could be doing is unduly pessimistic, and more in line with what the chip companies tell you to think than with what they can actually do.

Comment: Re:Uhh yeah (Score 1) 108

by jd (#47877837) Attached to: Why Google Is Pushing For a Web Free of SHA-1

Agreed, which is why it should be there.

Nonetheless, there needs to be a backup plan in case it turns out that the NSA or GCHQ have a backdoor into it. If it's been deliberately compromised (and I'm not keen on the changes made AFTER it had been approved as SHA3, for that very reason), then the more paranoid amongst us need a fallback. I certainly wouldn't suggest HTTPS over Tor use algorithms considered three-letter-agency-unsafe for any part of the security protocol, for example, since those agencies are the ones doing most of the attacking.

There's no easy answer to this, but I think having SHA3 and NESSIE as the two standard choices, plus limited support for some third algorithm for when approval simply isn't good enough, is the only real solution. The first two can be standard in all browsers and with all certificate authorities; the third only needs support in special-purpose browsers and OpenCA/OpenSSL/LibreSSL (since most uber-secure sites will roll their own certs).
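One way to sketch that kind of graceful fallback, using Python's hashlib (the preference list is hypothetical; SHA3 is in the standard library, while Whirlpool is only present if the underlying OpenSSL build provides it):

```python
import hashlib

# Hypothetical preference order: try each algorithm, falling back when
# the local build doesn't provide it.
PREFERRED = ["sha3_256", "whirlpool", "sha256"]

def best_digest(data: bytes) -> tuple[str, str]:
    """Return (algorithm name, hex digest) for the first available choice."""
    for name in PREFERRED:
        try:
            h = hashlib.new(name)
        except ValueError:  # algorithm not compiled into this build
            continue
        h.update(data)
        return name, h.hexdigest()
    raise RuntimeError("no acceptable hash algorithm available")

name, digest = best_digest(b"example certificate bytes")
print(name, digest)
```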

Comment: Sounds stupid. (Score 0) 296

by jd (#47867547) Attached to: WD Announces 8TB, 10TB Helium Hard Drives

High capacity I can understand, but high speed is senseless. At current transistor sizes, you could easily put 10TB of battery-backed RAM on a hard drive. You can then pull the data off the drive into RAM and write changes back when enough accumulate or when a sync command is sent. RAM doesn't drain the battery significantly; it only needs to maintain state, and then only on the dirty portions. That'll easily buy enough time to survive power outages and Windows crashes.

If everything is in RAM, access times are insignificant for always-on machines (the ones likely to need 10TB of disk space). Since writes can be postponed until critical, the disk can spend most of its time fully powered down.

Now, if you're REALLY clever, you have twice that RAM: one lot for working space (which doesn't need battery backing) and one lot for writing to disk. This second set can be kept permanently defragmented, with writes laid out to be compact and the hard drive spun specifically to serve them.
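The write-back idea can be sketched as a toy model (a dict stands in for the platter here; this is an illustration of the scheme, not a driver):

```python
class WriteBackCache:
    """Toy model of the battery-backed RAM idea: reads come from RAM,
    writes are held as dirty entries, and the disk is only touched on sync()."""

    def __init__(self, disk: dict):
        self.disk = disk       # stand-in for the platter
        self.ram = dict(disk)  # everything preloaded into RAM
        self.dirty = set()

    def read(self, key):
        return self.ram[key]   # never touches the disk

    def write(self, key, value):
        self.ram[key] = value
        self.dirty.add(key)    # disk stays spun down

    def sync(self):
        for key in self.dirty:  # one batched, compact write pass
            self.disk[key] = self.ram[key]
        self.dirty.clear()

cache = WriteBackCache({"a": 1})
cache.write("a", 2)
assert cache.disk["a"] == 1    # disk untouched until sync
cache.sync()
assert cache.disk["a"] == 2
```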

Comment: Re:Uhh yeah (Score 1) 108

by jd (#47867385) Attached to: Why Google Is Pushing For a Web Free of SHA-1

Microsoft will probably implement SHA0. There's no value in SHA2 (and variants) now that SHA3 has been ratified, since SHA2 is just SHA1 with some lengthening. If SHA1 is brutally compromised, SHA2 will fall shortly after. Best to switch to NESSIE (Whirlpool) and SHA3 (something that sounds vulgar).

Having said that, SHA3 involved dubious mid-contest rule changes and spurious rejection criteria that might well have been NSA-inspired. I'd take a very close look at the Hash Function Lounge for any second- or third-round reject that shows greater resilience across the board (pre-image vulnerabilities, etc.) as a backup in case NESSIE and SHA3 are both seriously compromised.

Comment: Hmmm. (Score 2) 230

by jd (#47829915) Attached to: Did you use technology to get into mischief as a child?

I deny all knowledge of the Epson FX spontaneously catching fire.

The short circuit that blew up two power transformers and an embedded computer had nothing to do with me. And you didn't see me. And I was in disguise anyway.

Nobody saw me insert the radio direction finder valves into the R1155, switch it on and jam all televisions in the neighbourhood.

So, no, I've no knowledge of using technology to get into trouble. None whatsoever.

Comment: Re:Media (Score 2) 455

by jd (#47773019) Attached to: Should police have cameras recording their work at all times?

Yeah, I can see you do great on statistics, too.

Death stopped being binary some years back (I suggest you read the medical news), but this isn't about that. This is simple numbers. If device X kills N times out of 100 and device Y kills M times out of 100, where N != M, the lethality of the two devices is not the same.

Comment: Re:Media (Score 4, Insightful) 455

by jd (#47772299) Attached to: Should police have cameras recording their work at all times?

Cops are not doing a good job. Estimates range from 400 to 1,000 unjustified deaths a year. To put that in context, since 9/11 there may well have been four times as many unjustified deaths at the hands of cops in America as unjustified deaths caused by al-Qaeda.

That isn't acceptable by any standards.

Or, if you'd prefer, I can put it another way. There have been three times as many incidents of manslaughter and murder by American cops, per capita, as there have been incidents of manslaughter or murder in Britain in total.

That number is WAY unacceptable.

Cops carrying guns confer no benefit on those in the area (80% of bullets fired from police handguns miss their target; they don't vanish, and they do hit passers-by, sound crews, hostages, etc.).

Cops carrying guns confer no benefit to law and order, since alternatives already exist, from stun guns to pain rays (microwave stimulation of nerve endings, if you prefer) to tear gas (which isn't great, but is less lethal than a lump of lead), and criminals are less likely to carry when running is a more practical option than a shoot-out. That has always been the British experience, which is why you now get regular shoot-outs wherever British cops are stupid enough to carry, where previously you'd have had maybe one a decade against an armed response unit.

Cops carrying guns confer no benefit to the cop, since dead weight can result in a cop becoming dead, accidental shootings are very likely to produce retaliation, and "utility" belts stop being useful when they terrify locals and intimidate visitors while emboldening thugs, who gain greater mobility and dexterity from not wearing them.

Look, this is all very simple. Too simple for nutters, perhaps, but simple nonetheless.

First, preventing crime by eliminating prime environmental and psychological causes is a good start. If there's no crime, there's nobody to shoot and nobody shooting back.

Second, preventing cops turning bad by preventing them developing a "them vs us" attitude is essential, and you don't achieve that by giving them powers of scrutiny while denying those powers to the people they scrutinize. It has to be a two-way street to prevent that kind of mindset.

But that requires one additional ingredient to work properly:

Third, preventing cops turning bad by preventing them from being have-a-go heroes. They should work with the community, be part of the community, and guard it from within. And, like all good guards, they should NOT be on constant alert. They should be constantly engaging on a social level, not a paramilitary one. If a crime happens, let the criminal go somewhere where there ISN'T a huge danger to others. Inanimate objects can look after themselves; people need a bit more effort.

It is better to let a gang "get away" from the scene with no bullets fired, be tracked safely, and then be apprehended INTACT when it is safe to do so. Going in guns blazing causes excessive damage, risks the lives of those supposedly being protected and served, and for what? Some carcasses. No trial, no determination of the chain of events, no proof even that the dead body is the guilty party. It can't exactly answer questions in the dock, can it?

No, disarm the cops, give them high-res cameras (and maybe girls gone wild t-shirts, I dunno), and let them be what cops should be - good citizens. They are NOT the army, they should NEVER be allowed military-grade weapons, they should deal with matters calmly, quietly and sensibly.

If they're not capable of that, they're incapable of good. Of any kind.

Comment: If a ruggedized camera breaks (Score 2) 455

by jd (#47772225) Attached to: Should police have cameras recording their work at all times?

Then it wasn't an accident. Simple as that. People seem to forget that you can build these devices to withstand any force a cop's skull is likely to take, and more besides.

Storage is a non-issue, because you don't need to store much locally. Local storage only has to cover the time the cop is out of radio contact, plus the time to drain the buffer so no information is lost. So unless the cop is riding a motorbike in a cage, it's just not enough to create serious issues.
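The sizing arithmetic is straightforward; the bitrate and contact window below are illustrative assumptions:

```python
def local_buffer_gb(bitrate_mbps: float, out_of_contact_minutes: float,
                    drain_margin_minutes: float = 5) -> float:
    """Local storage needed: video bitrate times the window the camera
    may be out of radio contact, plus a margin to drain the backlog."""
    seconds = (out_of_contact_minutes + drain_margin_minutes) * 60
    return bitrate_mbps * seconds / 8 / 1000  # Mbit -> GB

# Illustrative: an 8 Mbit/s 1080p stream, 30 minutes without contact.
print(f"{local_buffer_gb(8, 30):.1f} GB")  # ~2.1 GB of local buffer
```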

Battery will be a bigger issue. It'll take a lot of battery to keep transmitting at a decent resolution. However, since cops with guns cause more trouble than they prevent, that's also easy to fix: sufficient batteries will weigh no more than a sidearm plus spare ammunition.

Actually, it might not be that bad. With the proposed mandate for vehicle-to-vehicle communication, a cop's radio could turn the entire road network into a gigantic ad hoc wireless network. You don't need as much power for a short-range transmission. Might as well get some value out of these stupid ideas.

Comment: Re:Developers prefer Ubuntu? (Score 1) 232

by jd (#47771989) Attached to: How Red Hat Can Recapture Developer Interest

Why would developers want/care about long-term support?

There are a tonne of packages out there that will grab source from a repository and compile it in a root jail. You then have binaries for every permutation of dependencies ever produced. Test harnesses (you remember those, the things developers are supposed to use) can give you a list of regressions and compatibility bugs within minutes of a commit.
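Enumerating that permutation set is trivial; the dependency names and versions below are hypothetical:

```python
from itertools import product

# Hypothetical dependency matrix: every combination a build farm would
# compile and run the test harness against after each commit.
deps = {
    "libc":   ["glibc-2.35", "musl-1.2"],
    "ssl":    ["openssl-3.0", "libressl-3.8"],
    "distro": ["ubuntu-22.04", "fedora-40"],
}

def build_matrix(deps: dict) -> list[dict]:
    """Expand a {dependency: [versions]} map into every combination."""
    names = list(deps)
    return [dict(zip(names, combo)) for combo in product(*deps.values())]

matrix = build_matrix(deps)
print(len(matrix), "build configurations")  # 2 * 2 * 2 = 8
```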

Long term support encourages developers to be lazy, to presuppose things that may not be true.

Developers are best off supposing nothing, testing everything, and isolating the conditional (which they should be doing anyway; it's good software design). If you don't have time to be competent, then you certainly don't have time to be incompetent. So find time.

Comment: Re:Developers prefer Ubuntu? (Score 0) 232

by jd (#47771969) Attached to: How Red Hat Can Recapture Developer Interest

As a developer, I can categorically state that I hate Ubuntu for development work. It is horribly sub-optimal and poorly organized, and its package management is unstable and space-inefficient. It also doesn't run on several of my white-box PCs. Very standard, old white boxes.

Red Hat is only marginally better on efficiency, but recovery is ugly.

Gentoo would be OK, except that compiler flags are a bother. I can't use the profiling utilities to calculate optimal flags when those flags will vary down the dependency chain.

Linux From Scratch is good, it's essentially how I put together my own systems between the last of the MCC builds and the first Red Hat I considered tolerable enough.

Look, I don't expect miracles immediately, only after the updates from the repository. There simply isn't any reason for so much broken code and so many suboptimal configs. Not when Ubuntu is run by a billionaire who can afford a few extra hard drives for high-end builds.
