Comment Re:Lol whut? (Score 1) 175

The problem with the $50K commercial solution is that they want you to pay $50K next year too. If their software does what you want already, then that's a hard sell, so typically they persuade you by adding new features. For something like OpenSSL, new features mean new ways of introducing vulnerabilities, so are often the last thing you want.

Comment Re: Finally the disk drive can die (Score 1) 264

I've got a 1TB SSD in my laptop. It cost about as much as the 256GB SSD in the laptop it replaced, bought two years earlier (shorter than my normal upgrade cycle, but work bought this one). You can pick them up for around £300, which is more expensive than a hard disk, but not by much - about a factor of five, which is half what the difference was a couple of years ago. I still have the laptop before that, and even with a processor a few generations older it's still disk-speed limited in a lot of cases, whereas this one rarely is, so it's definitely worth it. That said, my NAS has two 2TB spinning-rust disks and I'm looking at replacing them with 4TB ones. I'd love to replace them with SSDs, but it's not yet worth the cost. I am tempted by a smaller eSATA SSD to use for L2ARC and ZIL though...

Comment Re:Oh goody (Score 2) 264

Just because it uses a swap file doesn't mean it ever writes to it. A lot of operating systems have historically had the policy that every page allocated to a process must have backing store (somewhere to swap it to) reserved at the same time. If you have enough RAM, however, that backing store will most likely never be touched. If you're actually writing out 100GB/day to swap then you should probably consider buying some more RAM...
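A minimal sketch of that distinction, using POSIX mmap() (my illustration, not taken from any particular kernel):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = (size_t)1 << 30;   /* ask for 1GiB of anonymous memory */

        /* On a system with a strict reservation policy, backing store (swap
         * or RAM) for the whole gigabyte is reserved right here, and the
         * call can fail with ENOMEM even though nothing has been written
         * anywhere yet. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Pages are only faulted in when touched, and they only ever get
         * written to the swap device if the kernel needs the RAM back. */
        memset(p, 0xAA, 4096);

        munmap(p, len);
        return 0;
    }

Reserving swap at allocation time keeps the accounting honest (the kernel never promises memory it can't deliver), but a machine with plenty of RAM can go its whole uptime without a single swap write.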

Comment Re:But is it even usable? (Score 2) 208

You're assuming that you'd want to create a full tape's worth of backups. If you're generating 1.4TB/hour of data, then you might have this problem. If you're only generating a few GB/day then it's quite easy for your weekly backup to run in under an hour and just append to the tape. Periodically you swap over to a new tape.

Comment Re:By way of context... (Score 1) 208

I've just looked. I can get Quantum MR-L5MQN-01 LTO-5 tapes, with a native (uncompressed) capacity of 1.5TB, for about £20. The sweet spot for disks right now seems to be 3TB at about £80, so the disks are about twice as expensive per TB, but only if you don't include the tape drive. An LTO-5 drive costs about £1,300. To save that much I'd need to be using about 100TB of storage, which is a fairly small filer for a business, but an insane amount for a home user. Shopping around a bit, I can find some LTO-5 drives for about £800, which brings the break-even point down closer to 50TB, but that's still far more than I need to back up.
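The break-even point is roughly where the drive cost is paid back by the per-TB saving; with the figures above (disks at about £27/TB, tapes at about £13/TB):

    \pounds 1300 \,/\, (\pounds 80 / 3\,\mathrm{TB} - \pounds 20 / 1.5\,\mathrm{TB}) \approx 1300 / 13.3\,\mathrm{TB} \approx 97\,\mathrm{TB}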

Of course, if I were backing up that much, then I wouldn't want a single drive, I'd want a tape library. And once they're out of the library, tapes are a lot more durable.

Comment Re:NetBSD time_t (Score 1) 128

On x86, you can (now) use the x32 ABI to get the same effect. The problem comes when you need to run one or two 64-bit binaries: now they pull in a different libc and so on, and the extra i-cache churn from having multiple copies of the same libraries resident can quickly offset the reduced d-cache churn from smaller pointers (main memory usage is largely irrelevant: it's rarely a bottleneck, and the average 5-10% saving from reduced pointer size is in the noise).
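A rough illustration of the d-cache side of that trade-off (a made-up node type; the sizes assume GCC/Clang on LP64 versus the x32 ABI):

    #include <stdio.h>

    /* A typical linked data structure: mostly pointers. */
    struct node {
        struct node *next;
        struct node *prev;
        void        *payload;
        int          key;
    };

    int main(void) {
        /* Built with -m64 (LP64) this is normally 32 bytes; built with
         * -mx32 (32-bit pointers, 64-bit registers) it shrinks to 16, so
         * twice as many nodes fit in every cache line. */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }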

Comment Re:You are correct (Score 1) 128

Especially anything that used threads. Going from the strongly ordered x86, where basically everything is sequentially consistent for free, to the extremely weakly ordered Alpha, where writes only become visible to other threads with explicit barriers, breaks a lot of code that was only ever tested on x86. ARM has a similar problem.
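The classic pattern that bites people looks something like this (a sketch using C11 atomics; the function names are invented):

    #include <stdatomic.h>
    #include <stdbool.h>

    int data;                        /* plain, non-atomic payload */
    atomic_bool ready = false;

    /* Producer thread */
    void publish(int value) {
        data = value;
        /* Release: everything written before this store is guaranteed to
         * be visible to a thread that sees ready == true via an acquire
         * load. */
        atomic_store_explicit(&ready, true, memory_order_release);
    }

    /* Consumer thread */
    bool try_consume(int *out) {
        if (atomic_load_explicit(&ready, memory_order_acquire)) {
            *out = data;
            return true;
        }
        return false;
    }

Write the same thing with plain loads and stores and it will usually still appear to work on x86, because the hardware keeps the stores in order; on Alpha or ARM the consumer can see ready == true and still read stale data.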

Comment Re:Damn OpenBSD (Score 1) 128

Pretty much all 64-bit systems have used a 64-bit time_t forever, so the Y2038 problem is only an issue if people are still using 32-bit platforms in 24 years. Given that even ARM is now 64-bit, that seems quite unlikely (none of those old mainframes that were a problem for Y2K have this problem, and most databases use 64-bit time values because people care about dates further in the past than can be expressed with a 32-bit UNIX time_t). Of course, Google has just released a new Java implementation for Android that does a load of void* to int32_t casts all over the place and will need almost a total rewrite to port to a 64-bit architecture, so you can't always trust big software companies not to be complete idiots...
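The sort of cast in question looks like this (a hypothetical fragment, not Google's actual code); it's harmless on a 32-bit platform and silently loses half the pointer on any LP64 one:

    #include <stdint.h>

    void *lookup_object(void);    /* hypothetical: returns an object pointer */

    int32_t broken_handle(void) {
        /* Legal C, but on a 64-bit platform it throws away the top 32 bits
         * of the address, so the "handle" can never be turned back into a
         * valid pointer. */
        return (int32_t)(intptr_t)lookup_object();
    }

    intptr_t portable_handle(void) {
        /* intptr_t is defined to be wide enough to round-trip a pointer,
         * whatever the platform's pointer size. */
        return (intptr_t)lookup_object();
    }

Code like the first function has to be found and fixed call site by call site, which is why a port like that ends up touching almost everything.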

Comment Re:YAY for BSD (Score 2) 128

Not true. It would have done if OpenSSL hadn't used a custom allocator: the custom allocator bypassed OpenBSD's malloc(), whose policy of aggressively returning unused pages to the OS would have turned this kind of out-of-bounds read into a fault. And why does OpenSSL have this custom allocator? Because without it, people complain that malloc() implementations like OpenBSD's are too slow...
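The general shape of the problem (a toy freelist, not OpenSSL's actual code): freed buffers go onto a list and get handed straight back out, so the memory is never returned to the OS, never gets unmapped, and an out-of-bounds read over it never faults.

    #include <stddef.h>
    #include <stdlib.h>

    #define BUF_SIZE 16384

    struct buf {
        struct buf   *next;
        unsigned char data[BUF_SIZE];
    };

    static struct buf *freelist;

    void *buf_alloc(void) {
        if (freelist) {
            struct buf *b = freelist;   /* reuse a previously freed buffer;   */
            freelist = b->next;         /* whatever it held is still in there */
            return b->data;
        }
        struct buf *b = malloc(sizeof(*b));
        return b ? b->data : NULL;
    }

    void buf_free(void *p) {
        /* Never calls free(), so a hardened malloc() that would junk or
         * unmap the pages never gets the chance to turn a stale read into
         * a crash. */
        struct buf *b = (struct buf *)((unsigned char *)p - offsetof(struct buf, data));
        b->next = freelist;
        freelist = b;
    }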

Comment Re: YAY for BSD (Score 2) 128

FFS2 is basically the original Berkeley FFS (also known as UFS, but there are at least half a dozen incompatible filesystems called UFS, so that just gets confusing) with some extensions. It just increases the size of various fields in the inode data structure so that various limits are much larger. I'm not familiar with the OpenBSD implementation, but on FreeBSD it also supports soft updates (where metadata and data writes are sequenced so that the filesystem is always consistent, although fsck may be required to clean up) and journalling. Aside from that, it's a fairly conventional inode-based FS. If you want snapshots, FreeBSD provides them at the block layer via GEOM (I don't know what the OpenBSD equivalent is).

In contrast, ZFS rearranges all of the layering. At the lowest level, you have a set of physical devices that are combined into a single virtual device. On top of this is a layer that's responsible for storing objects and providing a transactional copy-on-write interface to the underlying storage. On top of this, you layer something that looks like a POSIX filesystem, or something that looks like a block device (or, in theory, something that looks like an SQL database or whatever).

For the user, this means that a load of things are easy with ZFS that are hard with UFS:

  • Creating snapshots with ZFS is an O(1) operation.
  • Creating new filesystems with ZFS is about as hard as creating directories.
  • Filesystems all have block-level checksums, and can keep multiple copies of files (if they're used for important stuff) on a single volume.
  • Compression and deduplication can be enabled on a per-filesystem basis. With UFS, there's no deduplication (although it would be possible to write a block-level dedup implementation for GEOM), and compression is handled at the block device layer.
  • You can delegate the rights to create and modify filesystem properties into jails safely with ZFS (not relevant to OpenBSD, as it lacks jails).

Comment Re:So what? (Score 2) 272

Emigration to space never makes sense once you do the maths. The escape velocity of Earth is 11.2 kilometres per second. Assuming a human masses around 100kg, the energy required to accelerate them to escape velocity (assuming 100% efficient propulsion and no support equipment) is around 6.2GJ, or 1.7MWh to put it in more consumer-friendly terms. The average American (to pick the country with the highest per-capita energy consumption) uses around 87kWh per year, so the cost of getting a human away from Earth, assuming perfect conditions, is around 20 times their energy consumption living on Earth for a year. Even assuming a space elevator and the most optimistic efficiency numbers, getting into space for less than your lifetime total energy consumption on the ground is difficult.
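That energy figure is just the kinetic energy at escape velocity:

    E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2} \times 100\,\mathrm{kg} \times (11\,200\,\mathrm{m/s})^{2} \approx 6.27\,\mathrm{GJ} \approx 1.74\,\mathrm{MWh}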

And that's just the economic argument. The population growth rate is currently sitting at about 1% per annum, which means about 70 million more people are born every year than die. For shipping people into space to be a feasible way of reducing the population, you'd need to export 70 million people a year, or around 200,000 per day. That's in the same ballpark as the total number of air passengers today, including short-haul flights.

Combining these two, the total energy cost is 340GWh (1.24PJ) per day, or 126TWh (450PJ) per year. To put that in perspective, the total energy consumption of the world in 2008 was around 140,000TWh, so you're only talking about 0.1% of the world's total energy consumption for your colonisation project - assuming theoretically impossible technology and that everyone goes naked. It typically takes a minimum of ten times as much mass in life-support equipment as in passengers, so now you're up to about 1%. Even optimistic efficiency numbers bump this closer to 50%. If you actually want them to go somewhere with enough equipment to do something vaguely like colonisation, then you're up to over 100% of the total energy production of the world today, and a throughput of 2-3 people boarding every second, constantly, all day, all year round.
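That ratio, spelled out:

    126\,\mathrm{TWh} \,/\, 140\,000\,\mathrm{TWh} \approx 0.09\% \approx 0.1\%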

A more compelling argument is that having some self-sustaining colonies in space means that a global catastrophe won't kill all humans. We're still a long way away from being able to build one, though, and it's not clear that investing in things like the ISS is actually taking us in that direction. Just as NASA likes to tout how spin-offs from space research have helped other industries, many of the significant improvements in the technology used in space have come from elsewhere.

Comment Re:do we need more shitty scripters? (Score 2, Insightful) 92

kids who don't understand pointers

There are two things that this can mean: Do they understand the concepts of indirection and aliasing, or do they understand the concept that memory is addressed by numbers? The former is important to pretty much any programming problem, but can be taught in any language that has references (including Ruby, Java, and so on). The latter is only really important to people doing kernel or embedded programming.
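A toy way to see the difference (hypothetical example): the first function is pure indirection and aliasing, and you could express the same idea with references in Java or Ruby; the last line only makes sense once you know that memory is a flat array of numbered bytes.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Indirection and aliasing: two names for the same storage. */
    static void swap(int *a, int *b) {
        int tmp = *a;
        *a = *b;
        *b = tmp;
    }

    int main(void) {
        int x = 1, y = 2;
        swap(&x, &y);

        /* Addresses as numbers: the address itself is an integer you can
         * print, compare, or (in kernel and embedded code) compute with. */
        printf("x=%d y=%d, and x lives at address %" PRIuPTR "\n",
               x, y, (uintptr_t)&x);
        return 0;
    }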

Comment Re:It's a turd that's slowly being polished (Score 1) 435

You probably haven't seen truly bad C code then. Really bad C code tries to do C++ templates using the preprocessor: you define a few macro variables and then include a file that recursively includes other files to a depth of four or more, eventually producing the code that you wanted. The one thing this has over C++ templates is that you can read the output of the preprocessor and look for the bugs. Good luck fixing them though.
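For anyone who hasn't had the pleasure, a tame single-level version of the pattern looks like this (file and macro names invented; the really bad codebases nest these includes four or five levels deep):

    /* ---- vector_impl.h: "instantiated" by defining TYPE and NAME ---- */
    #define PASTE2(a, b) a##b
    #define PASTE(a, b)  PASTE2(a, b)

    struct PASTE(NAME, _vec) {
        TYPE   *items;
        size_t  count;
    };

    static TYPE PASTE(NAME, _get)(const struct PASTE(NAME, _vec) *v, size_t i) {
        return v->items[i];
    }

    #undef TYPE
    #undef NAME

    /* ---- user code: each "instantiation" is another define-and-include ---- */
    #include <stddef.h>

    #define TYPE int
    #define NAME int
    #include "vector_impl.h"    /* generates struct int_vec and int_get() */

    #define TYPE double
    #define NAME dbl
    #include "vector_impl.h"    /* generates struct dbl_vec and dbl_get() */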

Comment Re:What's the problem? (Score 2) 1198

I disagree. It's very easy and intellectually lazy to say 'we should give the state the right to torture people to death, because look how bad this person is! Surely they'd only use it on someone that bad'. It's the same line of reasoning that says that the state should be granted warrantless wiretapping rights, because surely they'd only use them to go after terrorists. And maybe pedophiles.

If you're not okay with the state having a license to torture, then it doesn't matter how bad the person they're torturing is. If you are... then I hope I don't live somewhere where you're allowed to vote.
